I really had to laugh at your post!

This is exactly what people were saying when the rumour came out that Apple was going to make an MP3 player.
"What??? But we already have them! Nothing new! What the heck do they want to improve on that? Hahaha, Apple is crazy! And look how expensive! Typically Apple! Well, not me, I will not buy it!" - the rest is history.

This is exactly what people were saying when the rumour came out Apple was going to make a cell phone.
"What??? But we already have them! Nothing new! What the heck do they want to improve on that? Hahaha, Apple is crazy! And look how expensive! Typically Apple! Well, not me, I will not buy it!" - the rest is history.

.....
The above gets used a lot as a kind of justification for why people's views and opinions on the VP are wrong. It is totally wrong in this case: whilst there were other iterations of MP3 players and mobile phones, no one had done them the way Apple did. With the VP, however, Apple has not done anything different from what is already out there, apart from using better hardware for improved optics. Hand gestures are not something new; they had been thought of before but not implemented, because it was thought users' arms would tire very quickly from having to hold them out in front so the headset cameras can see them. Apple implemented them anyway, and what was one of the criticisms of the VP? Yep, having to use hand gestures with the arms up and out in front. It was quickly pointed out that people would tire using the onscreen keyboard.

As I said in my post, Apple has not brought something new to the VR table. All they have done is bring a better technical spec, one with far better optics.
 
I do understand what you are saying, and I kind of agree.
But then again, the first, original iPhone had a mediocre camera, did not have copy and paste, and so on.
It slowly evolved into what we have nowadays. Hardly anyone can live without a phone anymore.
That will most probably not be the case with the AVP; however, for a very first model, I think it is pretty well done.
One can only expect it to get better from here on.

Anyway, time will tell.
 
Using this for watching movies (among other uses) sounds amazing since you can make the apparent screen far larger than I could ever fit in my media room. However, the audio system in my media room flips that equation in that it's light years beyond the AVP's built-in speakers or AirPods Pro. So my question is, given I have an Apple TV, can I watch the visual portion of a movie on the AVP while I listen to the audio portion in full quality on my surround system? Even an indirect way would be good as long as it works well, like maybe using SharePlay with the Apple TV and muting the AVP side.
 
Can someone (i.e. me) who is functionally blind in one eye, and has 20/20 corrected vision in the good eye experience any or all the features offered by Apple Vision Pro? I did a search a few years ago (before Apple Vision Pro was public knowledge) and there were some studies done with other makes/brands of headsets that demonstrated they were able to accommodate people blind in one eye, to a certain extent.

Billy, I'm in that same exact boat as you, although in my case I no longer have a right eye (it needed to be removed 6 years ago, but there's a prosthetic eye in the socket).

I haven't tried the VP yet either, but I read that there's an Accessibility setting that allows you to select a single eye to control navigation, so I was very happy to learn that.

• What I’m especially interested in knowing is related to depth perception and field of view:

• What is the field of view of the cameras (how many arc degrees does each camera span)?

• Is there an Accessibility setting that causes the VP to auto-magically combine the input of both cameras and project the full field of view to one eye? If so, this would be AMAZING, because it would restore peripheral vision for people with one eye! Now, I'm assuming everything in the projected scene would appear approximately "half sized", since you'd be experiencing the full binocular field of view with one eye. That might turn out to be functionally useless, offering no benefit and in fact hindering the experience. However, if Apple does support this, I'm sure they've worked some kind of magic to make the experience functionally useful. (A toy sketch of the stitching idea is at the end of this post.)

• If Apple doesn't support the above, I would love it if they attempted to get it to "just work". Or does anyone know if the developer APIs would allow me to attempt this technological feat myself?

• Exactly how does Apple render the pinned 3D widgets into the scene such that their 3D nature coincides with the way the brain builds a 3D image from the input of both eyes? In other words, on the one hand, the native 3D scene, absent any widgets, is "rendered" by the brain. On the other hand, the 3D widgets are an artificially projected overlay that requires a mathematical rendering that not only needs to correspond with the external 3D space, but must also be "re-rendered" by the brain such that the brain views the 3D widgets with the proper depth perception, parallax, etc.

• Perhaps people are having trouble understanding what I mean, so I'll expand a bit further… Apple needs to position and render the 3D widgets into a 3D scene that's built on the fly from the input of both cameras, which then gets projected by two separate 2D screens in a way that the brain "re-renders" everything, with Apple's placement of the 3D widgets remaining consistent. Apple does not have access to anyone's brain. :)

• Therefore I'm wondering how it's even possible for Apple to get this to work in the first place for people with two eyes. Does anyone get what I'm saying? :) Furthermore, since it obviously does work, it seems to me it should be possible for Apple to render the scene in a "one-eye mode" such that a person with one eye has their depth perception and parallax "effectively fully restored" despite viewing with only one eye. This is simply because it should be possible to create, on the fly, a simulated 3D experience that "just works" for people with one eye. (A rough sketch of the stereo-projection math is at the end of this post.)

I'm no 3D graphics developer, and I'm no brain expert, so maybe what I'm describing is simply impossible; I also have zero experience with 3D goggles. I do have a general understanding of optics and of how the brain sees in 3D, so I'm thinking there's a way to fake it and make it.
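
For the "combine both cameras into one eye" idea above: nothing I've seen suggests Apple exposes such a mode, but conceptually it's an image-stitching problem. The two passthrough cameras overlap in the middle of the view, so you would warp both images into a common projection, crossfade across the overlap, and show the resulting wide panorama to the good eye. Below is a deliberately toy Swift sketch of just the crossfade step; the function name and numbers are mine, and it skips all the real work (lens undistortion, rectification, exposure matching):

```swift
/// Crossfade two overlapping horizontal scanlines into one wider scanline.
/// `left` and `right` are brightness samples from the left and right cameras;
/// the last `overlap` samples of `left` look in the same direction as the
/// first `overlap` samples of `right`. Toy stand-in for real image stitching.
func stitchScanline(left: [Double], right: [Double], overlap: Int) -> [Double] {
    precondition(overlap > 0 && overlap <= left.count && overlap <= right.count)
    var out = Array(left.dropLast(overlap))           // left-only region
    for i in 0..<overlap {                            // blended overlap region
        let t = Double(i + 1) / Double(overlap + 1)   // 0 -> favour left, 1 -> favour right
        out.append(left[left.count - overlap + i] * (1 - t) + right[i] * t)
    }
    out.append(contentsOf: right.dropFirst(overlap))  // right-only region
    return out
}

// Two toy "camera rows" that share a 2-sample overlap in the middle.
let leftRow:  [Double] = [0.1, 0.2, 0.3, 0.5, 0.5]
let rightRow: [Double] = [0.5, 0.5, 0.7, 0.8, 0.9]
print(stitchScanline(left: leftRow, right: rightRow, overlap: 2))
// Prints one 8-sample row spanning both cameras' combined field of view.
```

Whether any of this is even reachable through the developer APIs is exactly the open question; I'm not claiming Apple gives apps the raw camera feeds.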
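
As for how the pinned widgets end up at a believable depth for two-eyed users: as far as I understand it (this is just the standard way any stereo renderer works, not something Apple has documented to me), the widget is simply drawn twice, once per eye, from two virtual cameras separated by your measured interpupillary distance. The horizontal offset between the two drawings (the disparity) shrinks as 1/depth, and that disparity is exactly the cue the brain "re-renders" into depth. A tiny Swift sketch with invented focal length and IPD values:

```swift
/// Perspective-project a head-centred 3D point onto one eye's image plane.
/// `eyeOffsetX` is that eye's horizontal offset from the head centre (about ±IPD/2).
/// Returns the horizontal image coordinate (metres) for a virtual focal length `focal`.
func projectX(pointX: Double, pointZ: Double, eyeOffsetX: Double, focal: Double) -> Double {
    // Shift the point into the eye's own coordinate frame, then project.
    return focal * (pointX - eyeOffsetX) / pointZ
}

let ipd = 0.063     // 63 mm interpupillary distance, a typical value
let focal = 0.02    // 20 mm virtual focal length, purely illustrative

// A "widget" pinned 0.5 m and 2.0 m in front of the viewer, centred horizontally.
for depth in [0.5, 2.0] {
    let left  = projectX(pointX: 0, pointZ: depth, eyeOffsetX: -ipd / 2, focal: focal)
    let right = projectX(pointX: 0, pointZ: depth, eyeOffsetX: +ipd / 2, focal: focal)
    let disparity = left - right   // works out to focal * ipd / depth
    print("depth \(depth) m -> disparity \(disparity * 1000) mm on the image plane")
}
// The nearer widget produces the larger left/right disparity, which is the
// binocular cue that makes it appear to sit at the right distance in passthrough.
```

With one eye that disparity cue is gone, but because the headset re-projects the scene every frame from the tracked head pose, nearby widgets still slide against the background as you move your head (motion parallax), which is presumably why a one-eye mode can still convey some sense of depth.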
 
Great comments, and you pose some very good questions. I wish I had bookmarked those headset studies I came across a few years ago. If I recall correctly, the reviewers were able to demonstrate that the headsets they tested could approximate a 3D experience for a person with one eye. I also noticed some comments over the past few weeks from two-eyed Vision Pro users who were experiencing depth-perception issues while using the headset: taking a drink from a glass with a straw in it and having the straw end up in their nose, making a mess while eating, running into furniture while walking around, etc. I'm sure a lot of this corrects itself over time with use of the headset.

I'm still getting used to navigating with just one functioning eye. The first 6 months were the worst. Pouring something into a glass and missing the glass. Setting something on a countertop or a table, missing, and having it crash to the floor. Driving has been OK, with some changes. With no depth perception, your brain has to learn a different way to judge when to put on the brakes. It's also much more difficult to judge your speed, which means paying a lot more attention to your speedometer readings. My most memorable experience was when I was standing in the bed of my pickup truck rearranging some cargo, and when finished just jumped from the tailgate onto the ground like I had done many times in the past. As soon as I was airborne I knew my mistake. I couldn't tell when to flex my knees for a landing. Luckily, I dropped and rolled as soon as I made contact.

Anyway, hopefully in the coming weeks and months we will learn more about new, unthought-of potential applications for Apple's Vision Pro. And I am going to sign up for a Vision Pro demo session, just to "see" for myself.
 

Oh man, yeah, I feel you, bro. It really sucks losing an eye, doesn't it? That's why I tell everyone: see an ophthalmologist twice a year. Not an optometrist, but an ophthalmologist. Have them check for what's called "severe lattice degeneration" if you're nearsighted. You'll be glad you did if they catch it early.

The loss of depth perception is difficult, as is losing not quite half the field of view (since the remaining eye does cover a fair portion of the lost eye's field of view). Nonetheless, I constantly bump into people who are to the right of me while walking. It's almost resulted in a physical altercation a few times, until you finally convince the person you only have one eye. Lol.

Loss of depth perception is difficult for the reasons you state. I'm also constantly knocking glasses over while eating because, as you said, you're never quite sure how far your hand is from the glass (or any object). Threading a needle is basically impossible (not that I thread a lot of needles, lol). Perhaps most difficult is simply losing the ability to experience the inherent beauty of life in 3D. One good example: stand under a tree and look up. You lose all of the beautiful layers of depth of the majestic tree that everyone pretty much takes for granted (not in a bad way, of course; it's just that sometimes it's difficult to truly appreciate something until you lose it, especially if you lose it permanently).

So that’s why I would be absolutely thrilled and thankful if Apple could effectively solve the “One Eye Problem” to give those who’ve lost an eye a realistic semblance of the peripheral vision and depth/parallax that they’ve lost. It would transform many lives, and countless people would be forever grateful.

I hope someone responds to my inquiry about the Apple Vision Pro developer APIs, specifically how much control a developer has over the physical cameras and over rendering the scene that's projected on the screens.
 