Ok. I truly appreciate the life energy you put into this reply. I’ll ignore the rant and umbrage because frankly you’re entitled to that based on my response to your comment. 🙏🏽
Here’s my rationale for viewing AVP as a paradigm shift:
- This AVP release is primarily about building Spatial Computing infrastructure/platform (sensors; real/virtual-world and application programming interfaces; spatial abstraction, computation, design, development, deployment, distribution, and privacy frameworks; the operating system; and other foundational elements), with a secondary focus on proof-of-concept demos rather than mature applications.
- All of this is being reduced to and compared with a single application of a spatial computing platform (AR/VR/MR) which misses the point of the platform.
- In other words, Spatial Computing is a higher layer of abstraction that encompasses AR/VR/MR and provides tooling to create spatially aware applications that go well beyond AR/VR/MR (imagine a remote collaboration session with a colleague to visually inspect and adjust a coolant regulating valve in a nuclear reactor — where remote camera feed + sensor data + digital twin simulation + embedded industrial controllers are brought together in an integrated computing environment in real time).
- Vision Pro is a platform intended to realize this abstracted spatial computing vision. This is a fundamentally different and more ambitious vision than that of existing AR/VR/MR devices. And no, I'm not making this up — invest the time to go to the Apple Developer portal and watch the Vision Pro presentations from Apple engineers, and you can see the entirety of the platform and imagine this and other possibilities for yourself.
- Vision Pro Version 1 is primarily focused on foundational technology frameworks and less so on applications — hence the lack of sizzle, which will come later as a result of reduced app-dev friction enabled by a well thought-out platform.
- Net-net: the paradigm shift is from AR/VR/MR appliance to generalized Spatial Computing platform (appliance + frameworks). Obviously, some of this is my opinion, but it is grounded in what Apple says about the platform they are building — because *they* are indeed the authority on what *they* are building.
I also very much appreciate your detailed response and I genuinely apologize for my ranting and umbrage. Ultimately I really enjoy reading different takes on here because I've had my opinion changed a lot based on MR posters alone. 💚
I'm glad you posted what you did because you've done a fine job of summarizing my hopes and wishes for the Vision platform. I have made similar posts in defense of the future paradigm. I think we're very much aligned on the potential for what Apple calls "Spatial Computing" but my specific issues are with Apple's Spatial Computing as it exists today with AVP Gen 1. In short the first stepping stone is not enough for me to declare the paradigm shift.
I still take issue with your last bullet point, though, because the only authority over whether or not a new paradigm has been reached is reality (loosely defined — what I mean by "reality" in this specific case is the general public's relationship with technology). The smartphone did this. Meta claims they are ushering in the age of the "metaverse," which, if their vision is fully realized, might be a paradigm shift... but nobody's calling it that right now because they haven't done it yet. Their vision is still just a vision. That's how I feel about Apple's "Spatial Computing" right now: it's a vision that's not fully realized, no matter how much engineering time and effort has been put into laying the groundwork. I guarantee you Apple believes the same; they know that we know their dream product is the glasses version. It's not an issue of AVP Gen 1 cost preventing the new era either: even if AVP Gen 1 were on sale for $100, I think the hardware and software are not at the level necessary for a Spatial Computing revolution.
Allow me to elaborate. My personal definition of "Spatial Computing" is:
- A form of computing that truly merges the digital world with the physical world such that the lines are blurred. I think this can be 100% realized with a pair of see-through glasses, which we won't see for decades unless there's a yet-to-be-achieved technological breakthrough (the opinion of those in the AR industry). We can get closer if Apple improves the issues with AVP Gen 1 that, to me, put that device specifically in the same camp as other VR/AR/MR headsets — namely: glare, FOV, and weight. Once Apple achieves the glasses product, I don't think we'll see the end of the Vision Pro product line; it will remain as the option that lets you fully immerse in an environment, because a glasses design can't achieve this.
- Spatial Computing must also be ubiquitous: I must be able to 'pop in' to the digital realm with zero friction, similar to taking an iPhone out of my pocket wherever I am. However, the iPhone has the drawback that everything is contained to the device and it cannot seamlessly interact with the real world, so that is not a truly frictionless digital experience for me. Example: real-time insights and overlays based on what I'm looking at. The Vision platform will be able to do this eventually — maybe even AVP Gen 1 with some software updates — but the iPhone can never do it realistically. AVP gives us a hint at that with the "look up to drop down" menu used to launch the app springboard, control environment settings, and open Control Center. But that's just software. The hardware must also be frictionless, hence:
- The hardware must reach a minimum threshold of invisibility. I should almost forget I'm wearing it. I mentioned this in point 1, but this is very important for me. visionOS 1.0 is better than any other VR/AR OS I've used by an order of magnitude, but the hardware holds it back a lot. I always feel like I'm wearing a VR headset with AVP, and that significantly blunts the experience for me. Whatever the screens did to my eyes concerned me enough that I swore never to wear them again until they have zero effect on my eyesight (I made a thread about this).
Until the Vision platform sheds the glaring (pun intended) limitations of other VR headsets, it's not a paradigm shift for me because my relationship with the AVP is not dramatically different from other headsets I've used and my relationship with technology in general has not yet changed because of it. It is markedly better than everything else and I thought it was more enjoyable and useful than anything else I've used in the segment. I've been excited about Apple's first discrete AR product for YEARS because I knew they would come out swinging.
Right now, based on my experience, it's just a better version of what's already out there. A much better version, with the groundwork set for something more substantial down the line, but going from a 720p LCD screen to an 8K OLED panel is not fundamentally changing how I interact with my Mac, for example. Now, going from a Mac to an iPhone is a different story, because the iPhone as a technology has changed my relationship with the real world and the digital world. I cannot imagine going back to a world without smartphones — nobody can — which is why I think it's easier to agree that's a paradigm shift. As of today, with Gen 1 AVP and visionOS 1.0, I can take it or leave it, and enough people seem to agree. I can't in good conscience call this a paradigm shift... yet. I can only call it once we reach it. The promise of what's to come is not enough. I know people worked their behinds off to make this first product as good as it can possibly be with the current technological limitations, but it's not enough for me. I made this mistake already after last year's WWDC, when the Gen 1 device I finally tried for myself fell short of my expectations. I imagined myself using it all day, every day for all kinds of productivity and media consumption tasks, but that didn't happen.
I hope that sheds light on why I called Spatial Computing a marketing stunt. As long as my relationship to Vision Pro is the same as other VR headsets, I'll call it glossy marketing. I wish it weren't the case and I believe it won't always be, but I'm done believing the hype until the reality of a product changes my life substantially.
Some of my benchmarks are:
- Can I go grocery shopping with both hands free wearing a pair of Vision Glasses (or whatever they'll be called) so that I can:
- Interact with my grocery list in a floating window and not worry about getting my iPhone dirty while I'm sampling fruit at the market, as happens right now.
- AR overlays to help me shop: directions to the right aisle for the next ingredient on my list, personalized nutritional information when I look at a product (tell me when a product contains something my brother might be allergic to), quick info about the company behind a product, where to buy more in bulk online, etc.
- It goes without saying, but I can't look like an idiot with ski goggles and a cable running down my jacket, nor can I interact with people via a fake pair of eyes. It's antisocial, which is honestly more concerning than looking like an idiot.
- Can I quickly interact with digital devices in my home without opening a windowed app? It should feel like telepathy and be easier than flipping a light switch on a wall. Again, this falls under the category of 'ubiquitous' computing:
- One tap toggling of HomeKit lights just by looking at them
- Turn on my Mac's screensaver even when it's on the other side of the room
- Get sprinkler schedule status when I look at my garden hose (lol)
- Is the device capable of rendering retina-level resolution content? I can't easily make out pixels on AVP, but plenty of complicated geometry still looks pixelated due to aliasing of text, certain UI elements, etc.
- The device must have ZERO consequences to eye or brain health. I got dizzy driving right after I took AVP off.
- Can I input text as quickly as (or quicker than) I can on iPhone?
Sorry again for the long post, but as you can tell, I'm more excited about this technology's potential than basically anything else.