
zubikov

Original poster
I have a feeling that LiDAR will be much more important to photography than to any AR/VR uses when the iPhone 12 Pros are launched. The current "bokeh" emulation, i.e. creating an artificial depth of field, is extremely complex and often flawed. The limitation is that one camera is used to "guess" the depth at each pixel while the other takes the picture.

The depth "guessing" is entirely computational and highly imperfect; traditional optics blow the simulated bokeh out of the water when compared side by side. LiDAR lends itself as a natural replacement for guessed depth: you would have a sensor dedicated to measuring the depth at each pixel nearly perfectly. That way you could effectively have three prime lenses on your phone, with the ability to simulate bokeh convincingly after the picture is taken.

Anyone else excited for this potential use of LiDAR tech? It could truly be a camera game changer that gets even more people to stop using their DSLRs.
 
LiDAR isn't that good for what you're describing.
There aren't nearly enough data points from the LiDAR in the iPad/iPhone to do depth mapping accurately enough for bokeh pictures. Using multiple lenses plus software will be far more accurate than using the LiDAR scanner. LiDAR is more for mapping out rooms and the like, so AR objects can be placed much more accurately.
 
ToF/LiDAR sensors have been common on Android smartphones for a couple of years now. It's nothing revolutionary.

The reason ToF is so common is that it's a low-cost solution, not because it's precise. Its main benefit is that it allows for better focus in low-light situations.

The ToF sensor used in the iPad Pro has a resolution of roughly 0.03 MP. That alone tells you what its primary purpose is.
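To put that 0.03 MP figure next to a 12 MP photo, the back-of-the-envelope math looks like this (just arithmetic on the numbers above):

```python
# Rough comparison of depth-sample density vs. image resolution.
# 0.03 MP is the figure quoted above; 12 MP is a typical iPhone photo.
depth_samples = 0.03e6      # ~30,000 depth points
image_pixels  = 12e6        # ~12,000,000 image pixels

pixels_per_sample = image_pixels / depth_samples
print(pixels_per_sample)            # ~400 image pixels share one depth sample
print(pixels_per_sample ** 0.5)     # i.e. roughly a 20 x 20 pixel patch per sample
```

In other words, each depth sample has to stand in for a patch of roughly 20 x 20 image pixels, which is nowhere near fine enough to cut around individual hairs.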
 
I think we should first talk about the bokeh effect and how Apple produces it. There is definitely room for improvement; however, photography is an art form and the camera is just the instrument. People often complain about certain aspects of a camera without any self-reflection or attempt to improve their own skill set, literally calling the camera crap because it can't compensate for their inability to use it. Apple has invested around half a billion dollars into machine learning largely because people would rather not practice human learning. But I digress.

The bokeh effect is the blur of the foreground and background caused by a shallow depth of field, which a camera lens produces through its focal length and aperture. A subject inside that narrow focal zone stands out because of the contrast in focus, and the result is simply pleasing to the eye. Everything within the depth of field is sharp, so there are no weird hair artifacts, no pieces of background in focus, no pieces of the focal area out of focus; optically, it's essentially perfect.
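For anyone who wants the optics behind that, here's the usual thin-lens estimate of how big the blur disc gets for an out-of-focus background point. The lens parameters are just example values (an 85mm f/1.4 full-frame portrait lens versus an assumed phone-sized 6mm f/1.8 lens):

```python
def blur_disc_mm(f_mm, n_stop, subject_m, background_m):
    """Thin-lens estimate of the blur-disc diameter (mm on the sensor)
    for a background point when the lens is focused on the subject."""
    f = f_mm / 1000.0                      # focal length in metres
    s, d = subject_m, background_m         # focus and background distances
    b = (f * f / n_stop) * abs(d - s) / (d * (s - f))
    return b * 1000.0                      # back to millimetres

# Both lenses focused at 2 m with the background 5 m away.
print(blur_disc_mm(85, 1.4, 2, 5))   # ~1.6 mm blur disc: obvious bokeh on a 36 mm-wide sensor
print(blur_disc_mm(6, 1.8, 2, 5))    # ~0.006 mm: practically invisible on a tiny phone sensor
```

That several-hundred-fold difference in blur-disc size is the gap computational bokeh is trying to close.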

Apple takes a bunch of existing technologies and adds its own spices to simulate that effect, which is vastly more difficult. On iPhones with two cameras and/or the TrueDepth camera, a depth map is created, then the subject is identified and segmented into the sum of their parts...

[Attachment: Screen Shot 2020-08-29 at 9.47.00 PM.png]


Machine learning models segment the subject because Portrait mode adds a realistic lighting effect. Light interacts differently with hair, teeth, skin, etc., and the color of those surfaces also changes how light appears. This is the part of Portrait mode that impresses me the most because it looks so good. The ability to simulate light is a game changer in my opinion, since capturing light is the fundamental act of photography.

The problem is where we set the bar, since these are photos. If we expect Apple to simulate reality better than actual reality, that is not only unrealistic, it's unreasonable. An iPhone won't be able to fake reality as well as a DSLR can simply capture it.

I actually think Apple is on the right track, and LiDAR is not only unnecessary for this but a step backward.

Absolute calculated distance isn't a very useful metric. Apple's depth map needs relative depth between points at the highest resolution possible to capture detail. It doesn't matter whether a tree in the background is 10 meters or 100 meters away, as long as the camera knows it's outside the depth of field.

Because the LiDAR dots leave a single point (the smartphone) and have to spread across the photo's entire field of view, it only takes a couple of feet before the gaps between the dots in the laser matrix are large enough to miss useful data. Fine detail like hair would be mostly lost if there were no other source for the depth map. BTW, I'm not saying there is NO use for it, just not in this particular application.
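As a quick sanity check on that spreading, here's the geometry with assumed numbers (the field of view and dot count below are guesses purely for illustration, not Apple's specs):

```python
import math

def dot_spacing_cm(distance_m, fov_deg=70.0, grid_dots=24):
    """Approximate spacing between neighbouring LiDAR dots at a given distance,
    assuming the emitter spreads a square grid of dots across the field of view.
    (fov_deg and grid_dots are assumed values for illustration only.)"""
    width_m = 2 * distance_m * math.tan(math.radians(fov_deg) / 2)  # scene width covered
    return 100 * width_m / grid_dots                                # cm between dots

for d in (0.5, 1, 2, 3):
    print(d, "m ->", round(dot_spacing_cm(d), 1), "cm between dots")
```

Even with generous assumptions, by normal portrait distances the dots end up several centimetres apart, so hair-level detail has to come from somewhere else.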

Keep in mind that resolution is key when simulating a bokeh effect, because a real camera's focal range is perfect: it can't miss an object inside that range and render it out of focus. What Apple does to create a depth map is capture with two cameras simultaneously, use the subject as the point of reference, and compare the images to see the parallax shift between pixels/points. The distance accuracy isn't the best, maybe a meter per pixel/point, but for a simulated bokeh effect that's just fine. And the beauty of the bokeh effect is that the blur hides errors.
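The parallax math being described boils down to the standard stereo relationship depth = focal length x baseline / disparity. Here is a tiny sketch with assumed phone-ish numbers (the baseline and pixel focal length are made up):

```python
def depth_from_disparity_m(disparity_px, baseline_m=0.012, focal_px=2800):
    """Classic stereo relation: depth = focal_length * baseline / disparity.
    baseline_m and focal_px are assumed phone-like values for illustration."""
    return focal_px * baseline_m / disparity_px

# The same one-pixel matching error matters much more for distant points:
for d_px in (40, 20, 10, 5):
    near = depth_from_disparity_m(d_px)
    off  = depth_from_disparity_m(d_px - 1)   # one pixel of matching error
    print(f"disparity {d_px:2d}px -> {near:4.2f} m (a 1px error shifts it to {off:4.2f} m)")
```

The farther away a point is, the smaller its disparity and the coarser the estimate, which is exactly why relative ordering matters more than absolute distance here.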

Here is a photo of me as represented by an Apple depth map, and then the processed separation from the background, which did better than I anticipated.


[Attachments: IMG_1307.jpg, IMG_1306.jpg]



I intentionally used low lighting (hence the fog) to show some of the flaws, namely the fuzziness on my arms. In the actual image my arms look like they are covered in dirt, and the thing on my left shoulder (your right) is a coffee maker behind me; it's in focus because it was assumed to be part of me or my clothing.

There is clearly room for improvement, especially knowing that Google's devices have produced amazing results with a single lens. Plus, Apple's machine learning is still in its infancy.

Again, the complaints I see from people are based on photos produced by a professional or hobbyist (i.e. someone skilled, talented and/or educated in photography) using a DSLR with lenses, lighting and accessories costing 2-10x as much as the iPhone, spending half the day capturing the image and the other half editing it. Considering that, the results the iPhone can produce by simulating reality in 15-30 seconds are just impressive, at least to me. Portrait mode makes me look good, and that's a goddamn miracle in and of itself!

To offer some tips for Portrait mode, for anyone who cares:

- Make sure there isn't a light directly behind your subject's head; that typically draws attention to oddities in the fine details.
- I see this a lot in public: don't bother using Portrait mode with the subject backed up against something like a wall, since that negates the bokeh effect.
- Keep in mind Apple is simulating depth of field, so your subject should be in the foreground. Unlike a shallow depth of field on a DSLR, something in the foreground can become the focal point in Portrait mode.
- When you get a chance to review your photos (it needs to be within 30 days, before the metadata is deleted), if you see artifacts from the bokeh effect, switch the photo lighting to Stage Light and reduce the f-stop until it improves, then switch to High-Key Light Mono to check and adjust, then switch back to your preferred lighting. It's better to minimize the bokeh effect if that improves the overall quality.
- Use the lighting effect to soften people's skin; adjust it up and down and you'll see what I mean.
- Not particular to Portrait mode, but have good lighting. The harder it is for you to distinguish foreground from background, the harder it is for the iPhone.
- Don't let the subject stand directly under a light where their facial features cast shadows onto their face.
- If it's sunny, put the subject's back at an off angle to the sun so the harsh light isn't in their face and the sun isn't underexposing the subject by being in frame.
- Edit your photos on a Mac/PC. GIMP is a free and open-source image editor available for PC/Mac/Linux that can do 90% of what Photoshop does for the casual user.
 
That's a lot of talking and, sorry, some parts are totally untrue.

Speaking of bokeh, you don't have to spend 2-10 times the money or half a day on a traditional camera!

Grab a Sony NEX body for $250 and a good old mechanical 50mm lens plus adapter for at most $100, and that's all you need. No special lighting or software.

Once you know how to operate the lens, you'll produce amazing portraits within seconds, with bokeh unmatched by Apple.

We can only hope that LiDAR will help, because I haven't seen any improvement in the last three years.
 
Thank you so much for the detailed explanation of how Apple builds its depth maps and portrait effects.

You make an excellent point about how the camera is just a tool and the results in each image are ultimately up to the photographer. Perhaps my hopes and ambitions for iPhone cameras are higher than the average Joe's. I've been shooting with 85mm f/1.4 IS and 24-70mm f/2.8 IS lenses on a full-frame camera for a long time, and portraits happen to be my favorite style. It's probably fair to say I have a more discerning eye for accurate separation of subject from background, and for how much of the subject remains in focus at a given distance, aperture and focal length. With that said, Apple, Google and the companies they have purchased are doing mind-blowing work on computational photography. I'm definitely not taking any of it for granted.

Based on what you're saying, it sounds like Apple should keep building on what they have in place. Maybe a sub-millimeter-precision LiDAR sensor could help improve the existing capture and processing pipeline, but we shouldn't count on the current iPad-style sensor being of much use in photo composition.
 
Learning to operate a real camera and lens is exactly the part most people can't, don't want to, or don't need to be bothered with, for good reason. A layperson may not need to know about aperture or focal length; for casual photos, they just need to set their camera app to Portrait mode and take the shot. Sure, fake bokeh can't be as good as the real thing, but I don't think Apple intends for Portrait mode to replace DSLRs with large sensors and fast lenses.

IMO, the addition of LiDAR won't be significant for Portrait mode. It's mainly there to continue the steady push toward AR.
 
LiDAR is just the beginning of the "many-eyed" phones we'll see emerging in the next decade.
The one I've been working on for a few years now is miniaturised near-infrared spectroscopy (see AMS and Texas Instruments).
There is also ultraviolet spectroscopy (handy for measuring caffeine in liquids).
The applications of both are very wide and profound.
Imagine your phone becoming something akin to the Star Trek tricorder, letting you scan and analyse, using AI: medication (are the pills what you think they are, and genuine? Are they past their best-before date?), food (% cocoa solids in your chocolate), alpha and beta acids in hops, THC and CBD in weed, whether the oil in your car needs changing, etc.
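For context on how that kind of quantification usually works, absorbance spectroscopy leans on the Beer-Lambert law (A = epsilon * l * c). Here's a toy sketch; the absorbance reading and molar absorptivity below are assumed placeholder values, not calibration data for any real sensor:

```python
# Beer-Lambert law: absorbance A = epsilon * path_length * concentration,
# so concentration = A / (epsilon * path_length).

def concentration_molar(absorbance, epsilon_l_per_mol_cm, path_length_cm):
    return absorbance / (epsilon_l_per_mol_cm * path_length_cm)

A = 0.42          # hypothetical UV absorbance reading near caffeine's absorption peak
epsilon = 9740    # L/(mol*cm), an assumed ballpark molar absorptivity, not a calibrated value
path = 1.0        # cm of liquid the light passes through

c = concentration_molar(A, epsilon, path)
print(c * 194.19 * 1000, "mg/L of caffeine (molar mass 194.19 g/mol)")
```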
 
We can only hope that LiDAR will help, because I haven't seen any improvement in the last three years.

When Apple introduced Portrait mode with the iPhone 7 Plus, it looked okay: convincing, but nowhere near as good as it is today. Starting with the XS series, they added an adjustable depth of field, which only increased the quality, and if you crank it all the way up you get what looks like swirly bokeh, where the background takes on a slight fish-eye look. Some real lenses produce exactly this, and I personally find it a very unique and cool look.

As for the bokeh itself, almost every other manufacturer just adds a big Gaussian blur and a few bokeh balls and calls that a portrait shot. Not Apple: they use the closest simulation of true lens blur that they could, similar to what Photoshop's Lens Blur filter does. The depth map then helps to gradually increase or decrease the blur where needed.
The edge detection could still be improved a lot, though, and I hope they use the LiDAR for exactly that. They will most likely not use it standalone, but in combination with the existing cameras and software to estimate depth as accurately as possible.
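To illustrate the Gaussian-versus-lens-blur point, here's a minimal sketch (NumPy/SciPy, arbitrary kernel sizes): a Gaussian kernel melts point highlights into soft blobs, while a uniform disc kernel turns them into the hard-edged "bokeh balls" a real lens produces.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, convolve

# A dark image with a few bright point highlights (stand-ins for distant lights).
img = np.zeros((101, 101))
img[30, 30] = img[50, 70] = img[80, 45] = 1.0

# Gaussian blur: highlights fade into soft gradients.
gaussian = gaussian_filter(img, sigma=6)

# Disc ("lens") blur: a uniform circular kernel of radius 10 px.
r = 10
y, x = np.ogrid[-r:r + 1, -r:r + 1]
disc = (x**2 + y**2 <= r**2).astype(float)
disc /= disc.sum()
lens_like = convolve(img, disc)

# The disc-blurred highlights keep a hard circular edge (bokeh balls),
# while the Gaussian ones just melt away.
print(gaussian.max(), lens_like.max())
```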
 
Yeah, it makes sense that if LiDAR were used, it would be in an additive way. Apple has bought an impressive bundle of companies over the last five years to keep improving this effect: LinX, PrimeSense, InVisage, Spektral, various machine learning companies, etc. Each time tiny hints of these technologies made their way into the camera, it was a significant leap in focusing, image composition, subject isolation and post-capture features.
 
I guess we were onto something here. LiDAR ended up being used to improve autofocus on the 12 Pros. Looking forward to real-life stills comparing the older AF methods with LiDAR-assisted ones.
 