I think we should talk about the bokeh effect and how Apple produces it first. There is definitely room for improvement, however photography is an art form and the camera is just the instrument. People often complain about certain aspects of a camera without any self-reflection or attempt to improve their own skillset. Literally complaining the camera is crap because it can't compensate for their inability to use it. Apple has around half a billion dollars invested into machine learning due to the demand created by people's unwillingness to practice human learning. I digress.
The bokeh effect is the blur of the foreground and background due to a shallow depth of field, produced by a camera lens by adjusting focal length and aperture. A subject in that narrow focal area is highlighted by the contrast in focus. The results are appealing for some reason. Everything in the depth of field is in focus, so there are no weird hair artifacts, no pieces of background in focus, no pieces of the focal area out of focus; it's otherwise perfect.
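If you want to see how narrow that focal area really is, here's a back-of-the-napkin sketch using the standard thin-lens/hyperfocal formulas. The lens and subject numbers are just example values I picked for illustration, nothing to do with Apple's hardware:

[CODE]
# Rough depth-of-field calculator using the standard hyperfocal formulas.
# Example numbers (85mm f/1.8 portrait lens, subject at 2m, ~0.03mm
# circle of confusion) are my own assumptions for illustration.

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    # Hyperfocal distance: focusing here keeps H/2 to infinity acceptably sharp
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if subject_mm >= hyperfocal:
        far = float("inf")
    else:
        far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

near, far = depth_of_field(focal_mm=85, f_number=1.8, subject_mm=2000)
print(f"in focus from {near/1000:.2f}m to {far/1000:.2f}m "
      f"(~{(far - near)/10:.0f}cm of depth)")  # roughly a 6cm slice at 2m
[/CODE]

That ~6cm slice is the whole game: everything inside it is sharp, everything outside it melts away.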
Apple takes a bunch of existing technologies and adds their own spices to simulate the effect, which is infinitely more difficult. Apple uses iPhones with two cameras and/or the TrueDepth camera to create a depth map, then the subject is identified and segmented into the sum of their parts...
View attachment 948732
Machine learning models segment the subject because portrait mode adds a realistic lighting effect. Light interacts differently with hair, teeth, skin, etc., and the color of those things also changes how light appears. This is the part that impresses me the most about portrait mode because it looks so good. The ability to simulate light is a game changer in my opinion, and capturing light is the fundamental aspect of photography.
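I have no idea what Apple's actual pipeline looks like, but conceptually it's something like this: once you have per-class masks (skin, hair, teeth, etc.), you can treat each region differently under the simulated light. A toy sketch with made-up masks and made-up adjustment numbers, purely to show the idea:

[CODE]
import numpy as np

# Toy example: "relight" an image per segmentation class.
# The masks and gain/lift values are invented for illustration; Apple's
# real models and lighting math are obviously far more sophisticated.
h, w = 480, 640
image = np.random.rand(h, w, 3)          # stand-in for the captured photo (0..1)
masks = {
    "skin": np.zeros((h, w), bool),
    "hair": np.zeros((h, w), bool),
}
masks["skin"][100:300, 200:400] = True   # pretend the segmenter found skin here
masks["hair"][50:100, 200:400] = True    # ...and hair here

adjustments = {                          # per-material response to the key light
    "skin": {"gain": 1.15, "lift": 0.05},   # skin scatters light, brighten softly
    "hair": {"gain": 0.95, "lift": 0.00},   # hair mostly absorbs, keep contrast
}

relit = image.copy()
for name, mask in masks.items():
    adj = adjustments[name]
    relit[mask] = np.clip(relit[mask] * adj["gain"] + adj["lift"], 0, 1)
[/CODE]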
The problem is where we are setting the bar, since these are photos. If we expect Apple to simulate reality better than actual reality, then that is not only unrealistic, it's unreasonable. An iPhone won't be able to fake reality as well as a DSLR can just show it.
I actually think Apple is on the right track, and LIDAR is not only unnecessary, it's a step backward.
Actual calculated distance isn't a very useful metric. Apple's depth map needs relative depth between points at the highest resolution possible to capture detail. It doesn't matter if a tree in the background is 10 meters or 100 meters away as long as the camera knows it's out of the depth of field.
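That's really the only question the depth map has to answer per pixel: in the focal slice or out of it. Conceptually something like this (fake data, and the real pipeline surely blurs progressively with distance rather than one flat blur):

[CODE]
import numpy as np
from scipy.ndimage import gaussian_filter

# Fake depth map in meters; in a real pipeline this would come from the
# dual cameras / TrueDepth data. All values here are invented.
depth = np.random.uniform(0.5, 20.0, size=(480, 640))
image = np.random.rand(480, 640, 3)

near, far = 1.8, 2.2                      # the simulated focal slice, in meters
in_focus = (depth >= near) & (depth <= far)

# Blur everything, then paste the in-focus pixels back on top.
blurred = gaussian_filter(image, sigma=(8, 8, 0))
result = np.where(in_focus[..., None], image, blurred)
[/CODE]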
As the LIDAR dots leave a single point (the smartphone) to cover the photo's field of view, it would only take a couple feet before the points (laser matrix) have gaps between them large enough to miss useful data. Fine detail like hair would be mostly lost if there was no other source for the depth map. BTW I'm not saying there is NO use for it, just not in this particular application.
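Rough numbers, and to be clear the grid size and field of view below are my assumptions, not a published Apple spec: spread a fixed grid of dots across the field of view and the gap between neighbouring dots grows linearly with distance.

[CODE]
import math

# Back-of-the-envelope dot spacing for a projected laser grid.
# ASSUMPTIONS: a 24x24 dot grid over a 60-degree field of view. The real
# sensor's dot count and FOV aren't something I know, so treat these as
# placeholders that just show how fast the gaps open up.
def dot_spacing_cm(distance_m, dots_per_side=24, fov_deg=60.0):
    width_m = 2 * distance_m * math.tan(math.radians(fov_deg / 2))
    return width_m / (dots_per_side - 1) * 100

for d in (0.5, 1.0, 2.0, 3.0):
    print(f"{d:.1f}m away -> ~{dot_spacing_cm(d):.1f}cm between dots")
[/CODE]

Even at arm's length the gaps are already centimeters wide, far too coarse to catch individual strands of hair without another data source filling in.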
Keep in mind resolution is key when simulating a bokeh effect, because a real camera's focal range is perfect: it can't miss an object in that focal range and put it out of focus. What Apple does to create a depth map is capture with two cameras simultaneously, use the subject as the point of reference, and compare the images to see the parallax shift between pixels/points. Accuracy for distance isn't the best, around 1 meter per pixel/point, however for a simulated bokeh effect that's just fine. And the beauty of the bokeh effect is that the blur hides errors.
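The stereo math itself is the old-fashioned part: with two cameras a known distance apart, depth falls straight out of how far a point shifts between the two images. A minimal sketch, and note the baseline and focal length numbers are guesses for illustration, not actual iPhone calibration values:

[CODE]
# Classic stereo depth from parallax: depth = focal_length * baseline / disparity.
# ASSUMED numbers: ~1cm between the two lenses, focal length of ~2800 pixels.
# Real iPhone calibration values differ; this just shows the relationship.
BASELINE_M = 0.01        # distance between the two cameras
FOCAL_PX = 2800.0        # focal length expressed in pixels

def depth_from_disparity(disparity_px):
    if disparity_px <= 0:
        return float("inf")   # no measurable shift -> effectively "far away"
    return FOCAL_PX * BASELINE_M / disparity_px

for shift in (28, 14, 7, 2, 1):
    print(f"{shift:3d}px of parallax shift -> ~{depth_from_disparity(shift):.1f}m")
[/CODE]

You can see why the accuracy falls apart at distance: past a few meters a single pixel of shift covers many meters of depth. But since everything back there is getting blurred anyway, it doesn't matter.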
This is a photo of me represented by an Apple depth map and then separated from the background, which did better than I anticipated.
View attachment 948748 View attachment 948747
I intentionally used low lighting (that fog) to show some of the flaws, namely the fuzziness on my arms. In the actual image my arms look like they are covered in dirt, and the thing on my left shoulder (your right) is a coffee maker behind me; it's in focus in the image because it was assumed to be part of me or my clothing.
There is clearly room for improvement, especially knowing that Google's devices have produced amazing results with a single lens. Plus Apple's machine learning is still in its infancy too.
Again, the complaints I see from people are based on photos they see that are produced by a professional/hobbyist (i.e. someone skilled, talented and/or educated in photography) using a DSLR with lenses, lighting and accessories that cost 2-10x as much as the iPhone, spending half the day capturing the image and the other half editing it. Considering that, the results the iPhone can produce by otherwise simulating reality in 15-30 seconds are just impressive, at least to me. Portrait mode makes me look good and that's a god damn miracle in and of itself!
To offer some tips for portrait mode to anyone that cares:
- Make sure there isn't a light directly behind your subject's head; this will typically draw attention to oddities in those fine details.
- I see this a lot in public, but don't bother using portrait mode with the subject backed up to something like a wall; that negates the bokeh effect. Keep in mind Apple is simulating depth of field, so your subject should be in the foreground. Unlike a shallow depth of field on a DSLR, something in the foreground can become the focal point in portrait mode.
- When you get a chance to review your photos (needs to be less than 30 days before the metadata is deleted), if you see artifacts from the bokeh effect, switch the photo lighting to 'Stage Light' and reduce the f-stop until it's better, then switch the lighting to 'High-Key Light Mono', check and adjust it, then switch back to your preferred lighting. It's better to minimize the bokeh effect if it improves the overall quality.
- Use the lighting effect to soften people's skin; adjust it up and down and you'll see what I mean.
- Not particular to portrait mode, but have good lighting: the harder it is for you to distinguish foreground from background, the harder it is for the iPhone. Don't let the subject stand directly under a light where their facial features cast shadows onto their face. If it's sunny, put the subject's back at an off angle to the sun so the harsh light isn't in their face and the sun isn't underexposing the subject because it's in frame.
- Edit your photos on a Mac/PC. GIMP is a free and open source image editor available for PC/Mac/Linux that can do 90% of the stuff Photoshop can for the casual user.