
Mousesuck

macrumors regular
Original poster
Jun 19, 2017
132
44
At last year's I/O, Google presented Project Soli.


A miniaturized radar that fits on a smartwatch. With precision, Soli detects micro-gestures of your fingers moving against each other or just moving in the air.

Then it translates them into a 1:1 manipulation of 3D virtual objects.
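
To make the idea concrete, here is a tiny hypothetical Swift sketch of that gesture-to-object pipeline (every type and name is invented for illustration; this is not the real Soli API):

```swift
// Hypothetical gesture events a Soli-like radar might emit.
// Invented for illustration; not a real Soli API.
enum MicroGesture {
    case buttonPress                 // index fingertip pressed against thumb
    case dialTurn(radians: Float)    // thumb rubbing across the index finger
    case sliderMove(delta: Float)    // thumb sliding along the index finger
}

// A virtual 3D control that the gestures manipulate 1:1.
struct VirtualKnob {
    var rotation: Float = 0          // radians
    var value: Float = 0.5           // normalized 0...1
    var isPressed = false

    mutating func apply(_ gesture: MicroGesture) {
        switch gesture {
        case .buttonPress:
            isPressed.toggle()
        case .dialTurn(let radians):
            rotation += radians      // 1:1: finger motion equals object motion
        case .sliderMove(let delta):
            value = min(max(value + delta, 0), 1)
        }
    }
}
```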

Although the current iteration of the Apple Watch is still in its infancy, I'm a big supporter of wearables, and I see them as the future of the Apple ecosystem.

Mainly, I believe in a deep integration between the Apple Watch and the future augmented-reality Apple Glasses. (Of course, AirPods will also be important in this ecosystem.)

As a concept, the Apple Watch would be your analytical body-tracking, wearable-notification, quick-text, calling, Apple Pay and Siri hub (with which you communicate using your AirPods).

The AW would exist as a physical object that keeps you continually connected to the other world, the impalpable world, the augmented world that only pops into existence when you wear your Apple Glasses, which are deeply integrated with the AW.

You would take the Apple Glasses out of your shirt pocket when you want to do some real work, experience media like a movie, type a document (since they can project a keyboard onto any surface), play, design, or do whatever you do right now on your iPhone/iPad and can't do on the AW.

I started with Project Soli because it seems like the perfect way to create a tasteful gesture-based human interface.

At the same time, the Apple Glasses could track the AW with such precision that they could augment it, showing information that floats around the AW and surpasses its small-screen real-estate limitation: a perpetual central hub floating around your wrist, from which you could select different options, settings and interactions, while the content is presented to you anchored to real surfaces, floating, or simply projected into the real world as a virtual object.
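
To picture the hub idea: assuming the glasses could hand us the watch's pose in world space (a big assumption; no such public API exists today), the layout math for a ring of floating panels is simple. A toy Swift sketch:

```swift
import Foundation
import simd

// Toy sketch: given the AW's pose in world space (assumed to come from the
// glasses' tracking; no such public API exists), compute world positions
// for `count` panels floating in a ring around the wrist.
func panelPositions(wristTransform: simd_float4x4,
                    count: Int,
                    radius: Float = 0.12) -> [SIMD3<Float>] {
    (0..<count).map { i in
        let angle = Double(i) / Double(count) * 2 * .pi
        // Offset in the wrist's local X/Z plane, then move into world space.
        let local = SIMD4<Float>(Float(cos(angle)) * radius, 0,
                                 Float(sin(angle)) * radius, 1)
        let world = wristTransform * local
        return SIMD3<Float>(world.x, world.y, world.z)
    }
}
```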

It would be a very interesting symbiosis between the real world and the virtual one.

What do you guys think?
 


DeepIn2U

macrumors G5
May 30, 2002
13,047
6,983
Toronto, Ontario, Canada
Great technology that was inevitably going to come to products.

The only problem is that they're showing it to us in a lab environment, as that's all they can do for now. The problem I see...

What people do naturally when talking with their hands and body language, whether or not they talk with their hands, can be mistakenly sensed as UI input. The other issue is that, across cultures, what is seen as a yes-or-no head nod in the Western world does NOT necessarily mean that in, say, India. For people natively raised there, a head nod we'd take as yes/no is actually just an acknowledgement of hearing you, and that's it. lol.

So the UI software would need to be tailored to the region a product is sold in, not unlike keyboard languages on our smartphones, tablets and computers. Unless of course we ship a handbook of instructions on which gestures will work for UI input/manipulation: not unlike getting your first Palm Pilot lol.
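
If I were building it, that per-region catering might look something like this toy Swift sketch (every name invented for illustration):

```swift
// Toy sketch of per-region gesture tables, shipped the same way
// keyboards ship per-locale layouts. All names invented.
enum BodyGesture { case headNodVertical, headShakeHorizontal, handWave }
enum Meaning { case confirm, reject, acknowledge, dismiss }

let gestureTables: [String: [BodyGesture: Meaning]] = [
    "en_US": [.headNodVertical: .confirm,
              .headShakeHorizontal: .reject,
              .handWave: .dismiss],
    // In some regions a nod is acknowledgement, not agreement.
    "hi_IN": [.headNodVertical: .acknowledge,
              .headShakeHorizontal: .reject,
              .handWave: .dismiss],
]

func interpret(_ gesture: BodyGesture, locale: String) -> Meaning? {
    gestureTables[locale]?[gesture]
}
```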
 
  • Like
Reactions: BarracksSi

BarracksSi

Suspended
Jul 14, 2015
3,902
2,664
The last thing I want to do is try to memorize a bunch of abstract hand gestures. I'm also loath to have to put on special glasses any time I want to do some technology-related task (which means I'd have to figure out what to do with the regular glasses I wear all the time now).

No, I think the next big step will be an all-audio interface, much like in the movie Her. There was barely a smartphone-type object in that film -- it looked more like a small, palm-sized address book -- and the desktop computers didn't have hardware keyboards.

The director designed the technology in the film to recede from the moviegoer's experience so they could focus on the Siri-like computer voice ("Her"), much like the main character did. If such technology were real, it implies the devices' computing power would be strong enough that we wouldn't have to use keyboards and touch inputs to translate our intentions into computer-ready data, essentially continuing the evolution from punch cards to QWERTY keyboards, then to mice and trackpads, and, lately, to multitouch screens.

I'd even suggest that smartwatches might be the first casualties of an all-audio computer interface. If you had an earpiece that could serve as an extension of your phone, giving you all the same notifications that you get on your wrist now, a smartwatch could easily be seen as a redundant device.

In a sentence: I think the future of personal technology would be less like Minority Report and more like Her.
 
  • Like
Reactions: dcpmark

dcpmark

macrumors 65816
Oct 20, 2009
1,029
815
I'd even suggest that smartwatches might be the first casualties of an all-audio computer interface. If you had an earpiece that could serve as an extension of your phone, giving you all the same notifications that you get on your wrist now, a smartwatch could easily be seen as a redundant device.

In a sentence: I think the future of personal technology would be less like Minority Report and more like Her.

Agreed, but we will still need displays from time to time to read things, since you can receive and process information much faster with your eyes than with your ears. I was just thinking of a micro-projector worn over your ear that could display information directly on your retina, giving you an AR sense of seeing info while still being able to interact with your environment.
 
  • Like
Reactions: Mousesuck

Mousesuck

macrumors regular
Original poster
Jun 19, 2017
132
44
Agreed, but we will still need displays from time to time to read things, since you can receive and process information much faster with your eyes than with your ears. I was just thinking of a micro-projector worn over your ear that could display information directly on your retina, giving you an AR sense of seeing info while still being able to interact with your environment.

Indeed, what you describe is augmented-reality glasses like the ones from Magic Leap, or at least what their multibillion-dollar company promises: light projected onto your retina, creating a light field.

 
  • Like
Reactions: dcpmark

Mousesuck

macrumors regular
Original poster
Jun 19, 2017
132
44
The last thing I want to do is try to memorize a bunch of abstract hand gestures. I'm also loath to have to put on special glasses any time I want to do some technology-related task (which means I'd have to figure out what to do with the regular glasses I wear all the time now).

No, I think the next big step will be an all-audio interface, much like in the movie Her. There was barely a smartphone-type object in that film -- it looked more like a small, palm-sized address book -- and the desktop computers didn't have hardware keyboards.

The director designed the technology in the film to recede from the moviegoer's experience so they could focus on the Siri-like computer voice ("Her"), much like the main character did. If such technology were real, it implies the devices' computing power would be strong enough that we wouldn't have to use keyboards and touch inputs to translate our intentions into computer-ready data, essentially continuing the evolution from punch cards to QWERTY keyboards, then to mice and trackpads, and, lately, to multitouch screens.

I'd even suggest that smartwatches might be the first casualties of an all-audio computer interface. If you had an earpiece that could serve as an extension of your phone, giving you all the same notifications that you get on your wrist now, a smartwatch could easily be seen as a redundant device.

In a sentence: I think the future of personal technology would be less like Minority Report and more like Her.

Well, although this is still hypothetical, I would argue that the gestures in the video have nothing abstract about them: they use the contact between fingers to give you a sense of force feedback, and they are accurate representations of moving and rotating dials, using joysticks and so on. It is one of the most ingenious concepts I've seen in gesture tracking, since they aren't asking you to frantically wave your arms in the air; it is a very discreet interface.

The fact that you already have another pair of glasses (corrective glasses, I suppose) isn't a problem, as AR glasses have clear lenses, and these lenses could be made to your prescription.
There is no reason to have two pairs of glasses, and although the first iterations will probably be a little bulky (though not by much, I think), nothing besides time prevents AR glasses from being indistinguishable from your normal ones, so eventually you could just wear the AR pair all day.

I do agree with you that the future is wearables, and that eventually AI will be a big part of how we interact with computers, mainly through the audio interface of the AirPods.

That being said, I completely disagree with your vision of an audio interface as the main form of interaction.

I say this because we are really, really close to Minority Report-style computing using glasses, and we are far, far further from Her-style audio interfaces.

In the movie Her, what you saw wasn't the next iteration of Siri (which of course will continue to evolve and get more and more useful) but what we'd call a strong super artificial general intelligence (SAGI).
It is general (an AGI) because it operates on different types of information and completes different types of tasks (like us humans), as opposed to narrow AI (ANI), which is a one-hit wonder (like your car's computer, or Google's indexing). It is strong because it has real cognition and motivation, as opposed to a weak artificial general intelligence (WAGI), which basically simulates cognition and intent but has none.
More than that, it is a super AI because it eventually surpasses human intelligence by so many orders of magnitude that it becomes god-like.

All of this is meant to eventually happen, but we are really far from the AGI we would need to make that vision of an audio interface work.

(By the way, there are two reasons all the tech gurus are advising a huge amount of caution with AI:

- An AGI would be able to displace most humans from their jobs.
- Once the first AGI is created, everything may happen really fast: if you can create a machine with the same level of cognition as a human, there is no limit to how far that self-improving machine could evolve on its own. This technology explosion, this singularity, would constitute the most important existential threat humanity has ever faced.)

I would say, and I'm with the optimists of the field, that we need at least 20 years for an AGI. That's not a lot of time, but the first iteration of real AR glasses is only one or two years from launch.

I also disagree with you because I imagine the different kinds of wearables as different parts of one ecosystem.

You may want to call Sarah, and with your AirPods the call will be made through your AW instead of through your iPhone.

Or you may ask Siri questions and give her complex tasks that eventually she will complete without making you feel stupid. These types of interaction are going to be really important.

But in the end, our eyes are the most important and efficient way to receive and process information.

Even with AGI, visual cues won't disappear... 2D screens will surely be replaced by new technology, but you will still consume media, and you may want to do work yourself rather than have an AGI work for you. For that, you need to see and hear.

Magic Leap interface demo:

Microsoft HoloLens: presently really bulky and, because of that, marketed for professional applications.
 
  • Like
Reactions: DeepIn2U

Mousesuck

macrumors regular
Original poster
Jun 19, 2017
132
44
Great technology that was inevitably going to come to products.

The only problem is that they're showing it to us in a lab environment, as that's all they can do for now. The problem I see...

What people do naturally when talking with their hands and body language, whether or not they talk with their hands, can be mistakenly sensed as UI input. The other issue is that, across cultures, what is seen as a yes-or-no head nod in the Western world does NOT necessarily mean that in, say, India. For people natively raised there, a head nod we'd take as yes/no is actually just an acknowledgement of hearing you, and that's it. lol.

So the UI software would need to be tailored to the region a product is sold in, not unlike keyboard languages on our smartphones, tablets and computers. Unless of course we ship a handbook of instructions on which gestures will work for UI input/manipulation: not unlike getting your first Palm Pilot lol.

I don't think so; the video is filmed in a lab because it's a way to show how cutting-edge their technology is. If you watch the live I/O presentation from two years ago, they showcase the technology live in front of everybody, and it worked gloriously.

Besides, Project Soli is going to start being commercialized as a developer kit later this year.

https://www.google.es/amp/www.pcwor...ure-development-kits-later-this-year.amp.html

As for gestures changing between cultures: although that is true, Project Soli only presents four gestures in the video, and these four gestures are cross-cultural because in reality they aren't gestures, they are actions. That is the beauty of it.

Pressing a button
button.gif


Dial
dial.gif


Slider
slider.gif


Joystick drag motion

soli-virtual-tools-xy-3.gif


These actions are common to every human, since they represent interactions with physical controls, while keeping their physical quality: the contact between the fingers translates into force feedback.
 

DeepIn2U

macrumors G5
May 30, 2002
13,047
6,983
Toronto, Ontario, Canada
I don't think so; the video is filmed in a lab because it's a way to show how cutting-edge their technology is. If you watch the live I/O presentation from two years ago, they showcase the technology live in front of everybody, and it worked gloriously.

Besides, Project Soli is going to start being commercialized as a developer kit later this year.

https://www.google.es/amp/www.pcwor...ure-development-kits-later-this-year.amp.html

As for gestures changing between cultures: although that is true, Project Soli only presents four gestures in the video, and these four gestures are cross-cultural because in reality they aren't gestures, they are actions. That is the beauty of it.

Pressing a button
button.gif


Dial
dial.gif


Slider
slider.gif


Joystick drag motion

soli-virtual-tools-xy-3.gif


These actions are common to every human, since they represent interactions with physical controls, while keeping their physical quality: the contact between the fingers translates into force feedback.


Actions are gestures.

Proof:
To press a button, do you use two fingers or one? I use one!

To rotate a dial, do you reach under the dial or grab its head to actuate it? Generally, most people and most dials are actuated at the head (the top), not the stem (below).

Slider: even when using a mouse on a virtual slider, you're moving the slider itself, not a virtual lever that moves the slider.


This is exactly what I meant about the lab, and about how what people do in the physical world will differ from what was presented. I'm sure you'll read my rebuttal as intended.

If a dial is presented and I move my hand in a circular motion in close proximity to show someone else, does the software mistakenly move it?? Hmm. See, I don't talk with my hands unless I'm in a rage. Many others always talk with their hands, so this is where a UI would have to be exceptionally well done to distinguish an intended gesture from a spurious, unintended one.
 

Mousesuck

macrumors regular
Original poster
Jun 19, 2017
132
44
Actions are gestures.

Proof:
To press a button, do you use two fingers or one? I use one!

To rotate a dial, do you reach under the dial or grab its head to actuate it? Generally, most people and most dials are actuated at the head (the top), not the stem (below).

Slider: even when using a mouse on a virtual slider, you're moving the slider itself, not a virtual lever that moves the slider.


This is exactly what I meant about the lab, and about how what people do in the physical world will differ from what was presented. I'm sure you'll read my rebuttal as intended.

If a dial is presented and I move my hand in a circular motion in close proximity to show someone else, does the software mistakenly move it?? Hmm. See, I don't talk with my hands unless I'm in a rage. Many others always talk with their hands, so this is where a UI would have to be exceptionally well done to distinguish an intended gesture from a spurious, unintended one.
I understand your line of thought, but I disagree.

We should differentiate between a gesture, the expression of an idea or an emotion through the use of body movements (nodding, pointing, waving...),

and an operative action, the act of causing something to function.

To cause something to function, you need to perform a series of coordinated and timed movements, since only then does the movement translate into function. (Riding a bike, opening a door, pressing a button, moving a lever.)

When you ask "to press a button, do you use two fingers or one? Because I use one," I would say that depends only on which button you are talking about, since a button is defined as something you press, not by what you press it with.

This is what is called abstraction: the capacity we humans have to understand a quality or idea as a concept, apart from any specific object or instance.

That is why you can identify and understand the function of a chair independently of how many legs it has.

That is also why, in a certain context, pressing the index finger against the thumb to start an action is a button.

Everybody who has ever seen a button will understand that, because it is part of the abstract concept of a button, and what you are facing is an operative action.
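
In programming terms, this is exactly what an interface or protocol captures. A minimal Swift sketch of the idea (all names are mine, purely illustrative):

```swift
// A "button" is defined by the fact that it can be pressed,
// not by what presses it. Purely illustrative names.
protocol Button {
    mutating func press()
}

struct DoorbellButton: Button {
    mutating func press() { print("ding-dong") }   // pressed with a finger
}

struct PinchButton: Button {                       // Soli-style: index against thumb
    var isOn = false
    mutating func press() { isOn.toggle() }
}

// Code that works with the abstraction doesn't care which concrete
// button it gets, just as we recognize a chair regardless of its legs.
func activate<B: Button>(_ button: inout B) {
    button.press()
}
```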

Operative actions are universal. A shiny sliding red door in the USA works the same as a rusty sliding door in Pakistan. If a kid in a small village in Pakistan has never seen one, he will not know how to open it until somebody explains how, or he tries. It will always work the same in any part of the world, as opposed to gestures, which are a form of non-verbal communication and are geographically and culturally locked.

I have never tried Soli, and I believe you haven't either, but the only thing the UI has to do exceptionally well is react only to what it is meant to react to, like a sliding door that won't react if you push it instead of sliding it.

Also, as far as I understand, the gestures have to be made in the vicinity of the sensor, which is one more filter against undesirable responses. But yeah, I haven't tried it, so I can't attest to the precision of the device.
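
If I had to guess at how such a filter works, it would be something like this sketch (thresholds and names are invented; I have no idea what Soli actually uses):

```swift
// Sketch of the "sliding door" filter: only accept a candidate gesture
// when it happens near the sensor AND the recognizer is confident;
// otherwise treat it as incidental hand-talk. Thresholds are invented.
struct GestureCandidate {
    let name: String
    let distanceToSensor: Float   // meters
    let confidence: Float         // 0...1 from the recognizer
}

func shouldAccept(_ c: GestureCandidate,
                  maxDistance: Float = 0.15,
                  minConfidence: Float = 0.9) -> Bool {
    c.distanceToSensor <= maxDistance && c.confidence >= minConfidence
}
```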

What I am sure of is that when you say these actions are virtual ("moving a virtual lever to move a slider"), you aren't right, since in reality the action is as physical as it gets: the slider you are moving is in fact your own finger.

Thanks, I like these kinds of discussions.
 

sunapple

macrumors 68030
Jul 16, 2013
2,834
5,413
The Netherlands
The last thing I want to do is try to memorize a bunch of abstract hand gestures. I'm also loath to have to put on special glasses any time I want to do some technology-related task (which means I'd have to figure out what to do with the regular glasses I wear all the time now).

No, I think the next big step will be an all-audio interface, much like in the movie Her. There was barely a smartphone-type object in that film -- it looked more like a small, palm-sized address book -- and the desktop computers didn't have hardware keyboards.

The director designed the technology in the film to recede from the moviegoer's experience so they could focus on the Siri-like computer voice ("Her"), much like the main character did. If such technology were real, it implies the devices' computing power would be strong enough that we wouldn't have to use keyboards and touch inputs to translate our intentions into computer-ready data, essentially continuing the evolution from punch cards to QWERTY keyboards, then to mice and trackpads, and, lately, to multitouch screens.

I'd even suggest that smartwatches might be the first casualties of an all-audio computer interface. If you had an earpiece that could serve as an extension of your phone, giving you all the same notifications that you get on your wrist now, a smartwatch could easily be seen as a redundant device.

In a sentence: I think the future of personal technology would be less like Minority Report and more like Her.

I love the tech from the movie Her. An all-audio interface is actually not hard to integrate into AirPods. In fact, I think it's very close to what we can achieve now, before augmented reality goes mainstream in whatever form that may be.

In the meantime, those address-book phones would be cool; they're also seen, in an even cooler form, in Westworld.

Devices will become less important, as at some point any screen can become your computer in whatever form factor. It will be interesting to see what replaces the screen.
 
  • Like
Reactions: Mousesuck

Mousesuck

macrumors regular
Original poster
Jun 19, 2017
132
44
I love the tech from the movie Her. An all-audio interface is actually not hard to integrate into AirPods. In fact, I think it's very close to what we can achieve now, before augmented reality goes mainstream in whatever form that may be.

In the meantime, those address-book phones would be cool; they're also seen, in an even cooler form, in Westworld.

Devices will become less important, as at some point any screen can become your computer in whatever form factor. It will be interesting to see what replaces the screen.

I agree with most of it, although, as I already wrote extensively in reply to BarracksSi, building an all-audio interface is an extremely complex task from which we are still very far away.

Very far, not because of the interface itself (pieces of which we can already see when we use Siri, and which would be fairly easy to implement) but because underneath that interface you would need an AGI (artificial general intelligence).

Even if you set aside the fact that in the movie Her you have a strong artificial intelligence (with real consciousness and intent, as opposed to a weak one that merely simulates those human qualities), an AGI is something that the most optimistic projections give a 62% chance of happening within 15 years and 70% within 20.

Of course, you can argue that you could create an all-audio interface without a real AGI.
While that's true, it is the same as saying you could create it using Siri...
- Siri, send a message to X.
- Siri, search for pictures of mountains and show me the first five.
- Siri, play the Watchmen movie on Netflix.
- Siri, what is the price of these shoes (that you are pointing at with your iPhone's AR camera)?

Yeah, it doesn't sound that cool, does it?
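
To show what I mean: without an AGI, an audio interface is at bottom pattern matching against fixed command templates, roughly like this toy Swift parser (entirely invented, and certainly not how Siri actually works):

```swift
// Toy command parser: rigid templates, no model of intent.
// Entirely invented for illustration.
enum Command {
    case sendMessage(to: String)
    case playMovie(title: String)
}

func parse(_ utterance: String) -> Command? {
    let lower = utterance.lowercased()
    if lower.hasPrefix("send a message to ") {
        return .sendMessage(to: String(utterance.dropFirst("send a message to ".count)))
    }
    if lower.hasPrefix("play ") {
        return .playMovie(title: String(utterance.dropFirst("play ".count)))
    }
    // Anything phrased outside the templates simply fails; there is no
    // understanding of intent to fall back on. That gap is what an AGI fills.
    return nil
}
```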

Until an AGI is created, Siri won't be able to talk to you as if she were your personal assistant, simulate an understanding of your intent, or draw relationships between the information she recalls and the tasks she is asked to perform...

Although she will get much more useful, commercial AR glasses are between five months and a year away, and they will use Siri to the best of her capacities.

But, albeit useful, she will still be an insipid experience. At least until a major paradigm-shifting breakthrough that changes the face of the earth happens.

I'm all for foldable tablets like the ones in Westworld, though. I believe that even with augmented reality there will still be space for physical screens; they just won't be the center of media consumption anymore.

Also, let's not forget that when an AGI is created, foldable screens won't be the only thing the real world has in common with Westworld.
 

BarracksSi

Suspended
Jul 14, 2015
3,902
2,664
The animated gifs showing "slider" and "joystick" are much too similar. I think you wrote a lot of nonsense (although I appreciate the effort).
 

Mousesuck

macrumors regular
Original poster
Jun 19, 2017
132
44
The animated gifs showing "slider" and "joystick" are much too similar. I think you wrote a lot of nonsense (although I appreciate the effort).

On that, I'll agree. They are indeed very similar, but I don't see why that would be a problem... assuming, of course, that the system is sensitive enough, as advertised, to differentiate between the two.

Now, on the subject of the "nonsense" I wrote: if you want sense, you will have to make it yourself.
 

Mousesuck

macrumors regular
Original poster
Jun 19, 2017
132
44
Go back to my first post in this thread.

As I’ve extensively written in reply to you, your first post was a flagrant display of ignorance on the subject.

Although I also appreciated your effort, the fact that you can't make sense of what I wrote reveals that you don't even grasp the most basic concepts behind the topic of artificial intelligence.

Therefore, the only thing I advise you to do is educate yourself on the subject, at least if you really want to defend your point of view. Otherwise you will look like a fool.
 

sunapple

macrumors 68030
Jul 16, 2013
2,834
5,413
The Netherlands
I agree with most of it, although, as I already wrote extensively in reply to BarracksSi, building an all-audio interface is an extremely complex task from which we are still very far away.

Very far, not because of the interface itself (pieces of which we can already see when we use Siri, and which would be fairly easy to implement) but because underneath that interface you would need an AGI (artificial general intelligence).

Even if you set aside the fact that in the movie Her you have a strong artificial intelligence (with real consciousness and intent, as opposed to a weak one that merely simulates those human qualities), an AGI is something that the most optimistic projections give a 62% chance of happening within 15 years and 70% within 20.

Of course, you can argue that you could create an all-audio interface without a real AGI.
While that's true, it is the same as saying you could create it using Siri...
- Siri, send a message to X.
- Siri, search for pictures of mountains and show me the first five.
- Siri, play the Watchmen movie on Netflix.
- Siri, what is the price of these shoes (that you are pointing at with your iPhone's AR camera)?

Yeah, it doesn't sound that cool, does it?

Until an AGI is created, Siri won't be able to talk to you as if she were your personal assistant, simulate an understanding of your intent, or draw relationships between the information she recalls and the tasks she is asked to perform...

Although she will get much more useful, commercial AR glasses are between five months and a year away, and they will use Siri to the best of her capacities.

But, albeit useful, she will still be an insipid experience. At least until a major paradigm-shifting breakthrough that changes the face of the earth happens.

I'm all for foldable tablets like the ones in Westworld, though. I believe that even with augmented reality there will still be space for physical screens; they just won't be the center of media consumption anymore.

Also, let's not forget that when an AGI is created, foldable screens won't be the only thing the real world has in common with Westworld.

Very interesting points, thanks! It's certain that any kind of next-generation computing requires some form of AI, and I'm curious to see how things will turn out.

It's very normal for people to have second thoughts about wearing AR glasses or using new ways of interacting with the UI; it's a large step from what we're used to. But of course, so were the computer, the mouse and the touch screen.

I think the key to learning any new interface is making it easy to undo mistakes and inviting the user to explore and find out how it works on their own. Learning these gestures shouldn't be hard or annoying if the product is trying to go mainstream, and during development you can make sure of that.

The biggest challenge might be getting people to actually wear those glasses. Even smartwatches work best when they look like conventional watches, because that is what people are used to. Maybe the Apple logo will do the trick.
 
  • Like
Reactions: Mousesuck

Mousesuck

macrumors regular
Original poster
Jun 19, 2017
132
44
Very interesting points, thanks! It's certain that any kind of next-generation computing requires some form of AI, and I'm curious to see how things will turn out.

It's very normal for people to have second thoughts about wearing AR glasses or using new ways of interacting with the UI; it's a large step from what we're used to. But of course, so were the computer, the mouse and the touch screen.

I think the key to learning any new interface is making it easy to undo mistakes and inviting the user to explore and find out how it works on their own. Learning these gestures shouldn't be hard or annoying if the product is trying to go mainstream, and during development you can make sure of that.

The biggest challenge might be getting people to actually wear those glasses. Even smartwatches work best when they look like conventional watches, because that is what people are used to. Maybe the Apple logo will do the trick.

I believe you are absolutely right. Even if the technology behind the AR glasses is absolutely transformative, since they are a wearable more than a gadget, making them fashionable, and marketing them as such, needs to be a top priority.

Otherwise it's an idea that will be difficult to sell, maybe even under the Apple brand. Anyway, some problems are only solved with time.
 

sunapple

macrumors 68030
Jul 16, 2013
2,834
5,413
The Netherlands
I believe you are absolutely right. Even if the technology behind the AR glasses is absolutely transformative, since they are a wearable more than a gadget, making them fashionable, and marketing them as such, needs to be a top priority.

Otherwise it's an idea that will be difficult to sell, maybe even under the Apple brand. Anyway, some problems are only solved with time.

Usually, when launching a new kind of product like this, the main problem is explaining why people need it. I think the Apple Watch, however, mainly sells because it's fashionable.

Really curious to see what these glasses will look like. At least Apple can learn from Google's mistakes.
 

Mousesuck

macrumors regular
Original poster
Jun 19, 2017
132
44
Usually, when launching a new kind of product like this, the main problem is explaining why people need it. I think the Apple Watch, however, mainly sells because it's fashionable.

Really curious to see what these glasses will look like. At least Apple can learn from Google's mistakes.
Indeed, to create a new trend you first have to create the need.

People will have to be exposed to this kind of technology and be persuaded that it is indeed the future that they all want.

For that, I believe Apple made a genius move with ARKit and the iPhone 8.
They are going to be the stepping stone from which the need is created.

Happy or not with the current capabilities of the Apple Watch, Apple managed to do something incredible with this device: they convinced everyone that technology is fashionable, or at least potentially fashionable.

That idea, currently seeded in our minds, will be of primal importance when they eventually ask us to put the glasses on.
 

BarracksSi

Suspended
Jul 14, 2015
3,902
2,664
As I’ve extensively written in reply to you, your first post was a flagrant display of ignorance on the subject.

What you were talking about is from a development-lab point of view (as others said), which usually diverges from real-world usability. That's why Google Glass fell so hard. It was a coders' project with no social awareness whatsoever; not once did they suspect that random strangers wouldn't want a camera pointed at them.

Sure, stuff can seem cool in a lab when you've been piecing it together day after day. But take it to a regular Joe and the reaction is usually, "Wtf?"

That's my reality check for you.
 

sunapple

macrumors 68030
Jul 16, 2013
2,834
5,413
The Netherlands
What you were talking about is from a development-lab point of view (as others said), which usually diverges from real-world usability. That's why Google Glass fell so hard. It was a coders' project with no social awareness whatsoever; not once did they suspect that random strangers wouldn't want a camera pointed at them.

Sure, stuff can seem cool in a lab when you've been piecing it together day after day. But take it to a regular Joe and the reaction is usually, "Wtf?"

That's my reality check for you.

You could say the same about an all-audio interface. Talking to Siri in public feels very weird to me, as actions that were previously private on your phone's screen are now audible to your neighbors on the train.

For any new system we can assume there is a significant learning curve and that there's time needed for the public to adjust.
 

BarracksSi

Suspended
Jul 14, 2015
3,902
2,664
For any new system we can assume there is a significant learning curve and that there's time needed for the public to adjust.

Except that the progression we've seen over the last sixty years has been the opposite: devices have gotten easier and easier for first-timers to use.

It must've been awfully nice to go from punch cards to a QWERTY keyboard, and everything since then has reached more people because it layers more human-like abstractions on top. Machine code is way, WAY down below an iPhone's touchscreen -- but it's still there, right? Yet the interaction model is natural enough that a toddler can accidentally navigate to the holiday pics of your mistress.

But when we start talking about adding a learning curve again, we're doing it wrong.
 

Mousesuck

macrumors regular
Original poster
Jun 19, 2017
132
44
What you were talking about is from a development-lab point of view (as others said), which usually diverges from real-world usability. That's why Google Glass fell so hard. It was a coders' project with no social awareness whatsoever; not once did they suspect that random strangers wouldn't want a camera pointed at them.

Sure, stuff can seem cool in a lab when you've been piecing it together day after day. But take it to a regular Joe and the reaction is usually, "Wtf?"

That's my reality check for you.

The weak nature of your argument makes me cringe.
Your use of anecdotal evidence to defend your point of view is laughable.

Ideas, good or bad, are all spawned in the same place: the development lab. A bunch of people try to tackle the problem from their limited point of view with the tools and money they were given.

This is as much the story of Google Glass as it is the story of the iPhone. Some ideas fail, others don't.

Google Glass didn't fail "because it was a coders' project that had no social awareness whatsoever."

That is not a variable in a dev lab; that is a fixed condition.
Google Glass failed because it was a badly developed project that didn't take those factors into account.

Sure, it can happen to Soli... anything can happen. There is indeed a possibility that Project Soli may very well be vaporware.

But how the hell do you know?
Is it your opinion?

Because if it isn't a fact, it doesn't hold any weight as an argument; it just holds weight as bull ****.

Anyway, as you very well know, when I called out your ignorance, I was pointing to your inability to understand the inherent difficulties behind your idea of an all-audio interface.

The fact that you decided not to reference the whole exchange reveals that you know you were wrong and don't want to admit it.
 

Mousesuck

macrumors regular
Original poster
Jun 19, 2017
132
44
Except that the progression we've seen over the last sixty years has been the opposite: devices have gotten easier and easier for first-timers to use.

It must've been awfully nice to go from punch cards to a QWERTY keyboard, and everything since then has reached more people because it layers more human-like abstractions on top. Machine code is way, WAY down below an iPhone's touchscreen -- but it's still there, right? Yet the interaction model is natural enough that a toddler can accidentally navigate to the holiday pics of your mistress.

But when we start talking about adding a learning curve again, we're doing it wrong.

You fail to see the point. Indeed, over the last 60 years devices have become easier and easier to use, not only for the first-time user but also for the experienced one.

That being said, they have become more powerful. They are indeed simpler to use, but the tasks for which you use that power are increasingly complex.
 