If it tries to look too cool, it probably isn't, just like someone who wears sunglasses at night or a Hawaiian shirt in the winter. Yes, make fun of it.

Sincere question. Why make fun of people for what they chose as their own style? Aren't we trying to get past all that?

If something doesn't impact me materially I tend to just ignore it.

And the reality is, as far as this tech goes, until it's as invisible as a contact lens, someone is going to find it dorky. Shrugs.
 
If it tries to look too cool, it probably isn't, just like someone who wears sunglasses at night or a Hawaiian shirt in the winter. Yes, make fun of it.
One thing they did right: no camera. Unfortunately, with so many of these types of devices HAVING cameras, people are going to think they're being recorded (especially when folks using devices that are supposed to show a red light when recording are defeating that light).
 
if I dress like a dork and someone makes fun of me I’ll take it like a man instead of crying.

There you go again with labels. I keep asking: why is this so important? I don't care enough one way or the other what people think to 'take it' or not. It's certainly not worth crying over. It just seems pointless overall, especially when it comes to a tool. Do people make fun of people with hammers? I mean, hammers are pretty dorky, I guess.

These are all just tools. Not fashion statements.
 
Sorry, are you judging this product based on another product's weaknesses? Hmmmm... now that I think about it, that Big Mac I had really did suck, because the Burger King Whopper does! 🤣
Except the FOV in their specs is less than the Xreal Air 2's, which is itself too small of an FOV for me. You know, my primary complaint.

Imagine me saying the MacBook Air 11" is too small, and then you come along saying, "How would you know if the 9" Windows laptop is too small if you've never tried it!!"

nice strawman argument though 🤣🤣
 
23M pixels vs. 4M: the field of view would have to be absolutely tiny!
The FOV is much smaller. But pixel counts aren't directly comparable. On the Vision Pro, not all pixels are directly visible. You can't view the corners of the physical display. The edges of the display aren't directly viewable—counterintuitively, you actually have to look away from the edge to see more of the edge of the screen.
 
The number of pixels isn't directly comparable.

With AR glasses of this style, all the pixels are in view at once, and in the default mode the software pixels are mapped one-to-one to the physical pixels. The viewable FOV is rectangular and matches up quite closely with the maximum FOV recommended for a single display (you typically wouldn't sit close enough to a standard 16:9 flatscreen for it to take up more than 45° of your vision).
[...]
But the big issue with these AR glasses is that to maximize their usefulness, they need to attach the screen to your head instead of the environment, which is less comfortable. You can’t rotate your head to look at the corners of the screen, you have to rotate your eyes.
Interesting, thanks for the info.

This does seem like a reasonable way to build something usable given the limitations of this level of technology. But the usefulness is likely to be extremely marginal for most people at best, and the tech level has since been far surpassed. The AVP, while itself full of compromises, seems more likely to be generally useful to a far larger group.
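To put the quoted 45° guideline in concrete terms, here's a quick back-of-the-envelope check (my numbers for a typical monitor, not the poster's): the horizontal angle a flat screen subtends is 2·atan(width / (2·distance)).

```python
import math

def horizontal_fov_deg(screen_width_m: float, distance_m: float) -> float:
    """Horizontal angle (degrees) a flat screen subtends at a viewing distance."""
    return math.degrees(2 * math.atan(screen_width_m / (2 * distance_m)))

# A 27" 16:9 monitor is roughly 0.60 m wide; ~0.75 m is a typical viewing distance.
print(f"{horizontal_fov_deg(0.60, 0.75):.1f} deg")  # ~43.6 deg, right at the 45 deg guideline
```

So a big desktop monitor at normal viewing distance really does land just under 45°, which is why a glasses FOV in that range can plausibly stand in for one.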
 
But the big issue with these AR glasses is that to maximize their usefulness, they need to attach the screen to your head instead of the environment, which is less comfortable. You can’t rotate your head to look at the corners of the screen, you have to rotate your eyes.
This is not correct (I think). For example, if you have a three-monitor setup in your XR glasses, you pan your head to look at the three monitors, which are stacked how you like them. At least, it works that way in the Viture Pro. I saw enough reviews before I plopped down the $459 for these to go along with my M1 Max MBP. I'm trying to get a virtual three-monitor setup on the go. Strangely, this is the major use case for buying an AVP at 8 times the cost.
 
This is not correct (I think). For example, if you have a three-monitor setup in your XR glasses, you pan your head to look at the three monitors, which are stacked how you like them. At least, it works that way in the Viture Pro. I saw enough reviews before I plopped down the $459 for these to go along with my M1 Max MBP. I'm trying to get a virtual three-monitor setup on the go. Strangely, this is the major use case for buying an AVP at 8 times the cost.
My next paragraph clarifies that. Some AR glasses default to a screen that moves with your head, but it looks like this one defaults to virtual screens that are semi-attached to the environment. I'm guessing that it will also have a mode that just shows a screen that is fixed to your head position.
 
My next paragraph clarifies that. Some AR glasses default to a screen that moves with your head, but it looks like this one defaults to virtual screens that are semi-attached to the environment. I'm guessing that it will also have a mode that just shows a screen that is fixed to your head position.
Yes, it has a "Pin Screen" mode to keep the screen attached to the front if needed. You get to pick ;-)
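For anyone curious what that toggle amounts to under the hood, here's a minimal sketch of the two placement modes. The names and structure are hypothetical illustrations, not Viture's (or anyone's) actual API: a world-locked screen keeps a fixed pose in the environment, while a pinned (head-locked) screen is a fixed offset re-composed with the head pose every frame.

```python
import numpy as np

# Hypothetical sketch -- not Viture's or Apple's actual API.
# Poses are 4x4 rigid transforms (rotation + translation).

WORLD_LOCKED_SCREEN = np.eye(4)                # fixed pose in the environment, set once
WORLD_LOCKED_SCREEN[:3, 3] = [0.0, 0.0, -1.5]  # e.g. 1.5 m in front of where you placed it

HEAD_OFFSET = np.eye(4)                        # fixed offset relative to the head
HEAD_OFFSET[:3, 3] = [0.0, 0.0, -1.5]          # always 1.5 m in front of your face

def screen_pose(head_pose: np.ndarray, pinned: bool) -> np.ndarray:
    if pinned:
        # "Pin Screen": the screen follows the head, so turning your head
        # can never bring its corners into view; only your eyes can reach them.
        return head_pose @ HEAD_OFFSET
    # World-locked: the screen stays put; turning your head changes
    # which part of it sits in the center of your view.
    return WORLD_LOCKED_SCREEN
```

That one branch is the whole disagreement above: head-locked mode forces eye movement to scan the screen, while world-locked (or multi-monitor) mode lets you pan your head instead.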
 
I think this is what Apple will have to do if they ever want to make AR glasses (and I've been waiting for them to do it since Google Glass came out years ago): just use the iPhone's processor and wirelessly transmit the display to the glasses. They were trying to figure out how to put M chips into the glasses, and that's the wrong approach until process nodes get down well below 1nm. Let the iPhone do the hard work and wirelessly send the display to the glasses. They could probably have this running now with current technology, especially since someone else already did it.
For AR glasses, I agree (and I would have loved that much simpler product!). Apple aimed for a wholly different feature set with the Vision Pro, though, and I'm guessing the number of input sensors needed for gaze, gesture, and environmental tracking meant that transmitting all that data to a second device would have added too much latency and trouble.
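For a rough sense of scale on that trade-off, a quick back-of-the-envelope calculation (illustrative numbers only, nothing from Apple's published specs): one uncompressed 1080p60 display stream is already around 3 Gbit/s, before any gaze or tracking data flows the other way. Compression cuts that dramatically, but the encode/decode step is exactly where added latency creeps in.

```python
# Back-of-the-envelope: uncompressed bandwidth for a wireless display link.
# Illustrative numbers only -- nothing here comes from Apple's specs.
width, height = 1920, 1080   # one 1080p display
bits_per_pixel = 24          # 8-bit RGB
fps = 60

bandwidth_gbps = width * height * bits_per_pixel * fps / 1e9
print(f"{bandwidth_gbps:.1f} Gbit/s")  # ~3.0 Gbit/s for a single 1080p60 stream
```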
 
The first time I used a virtual desktop in 1997, I said this thing is ****. The last time I used virtual desktops in 2019, I said this thing is ****. Like our lords the cats, I don't want anything on me except clothes.
 
I am definitely interested in this category of product. What I imagine is a spatial computing interface: an interface that presents content processed on other devices in the space around you. It would do something similar to what Vision Pro does with floating windows, and it would work with your existing devices, except that the external PC (or other device) would always be the primary processing device, not the glasses; the interface would only handle the spatial presentation of content from the source device. I'm imagining plugging a dongle into my work PC (Windows) and expanding my multiple enormous spreadsheets and databases to incredibly large sizes in the space around me in 8K.
 