Hi all,
Thought I'd ask here as I often come across some seemingly knowledgeable posters.
[Image: iPhone 17 lineup CAD render, via Majin Bu]

With the recent leaks showing the 17 line migrating the flash and LiDAR scanner, I have frequently seen people say the design is unlikely because it would cause parallax issues with the LiDAR data. That's a reasonable opinion, given that parallax is a documented problem in larger LiDAR/camera rigs like self-driving cars.
However, I'd like to ask if anyone has info on, or can explain, why the readouts couldn't be compensated for computationally on an iPhone. (Nothing in a phone's LiDAR use cases seems truly "mission critical", so could Apple be willing to think differently here? Sorry, I couldn't resist.)
Just to clarify, I'm not saying that "it won't work because of LiDAR issues" is wrong; I'd just like to know whether there could hypothetically be some mitigation. Could any of that A-series / ML horsepower be put to work on the problem, and if Apple wanted to do this, how could they?
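For what it's worth, here's the rough mental model I have of the compensation step. Everything below is hypothetical: the 2 cm baseline, the intrinsics, and the function are numbers and names I made up to illustrate the geometry, not anything from Apple's stack.

```swift
import simd

// Hypothetical extrinsics: assume the LiDAR module sits ~2 cm from the
// camera along the x-axis. The real iPhone geometry/calibration is unknown.
let lidarToCamera = simd_float4x4(columns: (
    SIMD4<Float>(1, 0, 0, 0),
    SIMD4<Float>(0, 1, 0, 0),
    SIMD4<Float>(0, 0, 1, 0),
    SIMD4<Float>(0.02, 0, 0, 1)    // translation lives in the last column
))

// Made-up pinhole intrinsics (focal lengths and principal point, in pixels).
let fx: Float = 1500, fy: Float = 1500
let cx: Float = 960, cy: Float = 720

// Re-project a 3D point measured in the LiDAR frame into camera pixel
// coordinates: rigid transform, then standard pinhole projection. The
// "parallax error" is just the pixel shift this produces relative to a
// co-located sensor, and it shrinks as depth grows.
func projectLidarPoint(_ p: SIMD3<Float>) -> SIMD2<Float>? {
    let q = lidarToCamera * SIMD4<Float>(p.x, p.y, p.z, 1)
    guard q.z > 0 else { return nil }    // point is behind the camera
    return SIMD2<Float>(fx * q.x / q.z + cx,
                        fy * q.y / q.z + cy)
}

// Same 2 cm baseline, different depths: the shift at 30 cm is ~10x the
// shift at 3 m, which is why close-range AR/occlusion is the hard case.
let near = projectLidarPoint(SIMD3<Float>(0, 0, 0.3))  // ≈ (cx + 100, cy)
let far  = projectLidarPoint(SIMD3<Float>(0, 0, 3.0))  // ≈ (cx + 10, cy)
```

If the two sensors are rigidly mounted and factory-calibrated, that transform is a one-time constant, so my naive take is that the correction itself is cheap. The catch, as I understand it, is occlusion: from an offset position the LiDAR can see surfaces the camera can't (and vice versa), and no fixed transform recovers data that was never sampled. That's presumably where the ML horsepower would have to earn its keep, in-filling those holes.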
I'm very interested in the opinions of those who know more on the subject than me. Can anyone share their thoughts?
On the flash issues, I'll admit I don't care as much; I never like to rely on phone lighting and prefer to shoot ambient or use external lighting where I can. Still open to your thoughts on that too, though.