This is “the future”, as Professor Andrew Blake, managing director of Microsoft Research in Cambridge, sees it. He specialises in machine learning and computer vision, and was part of the team that developed the technology behind Microsoft Kinect:
Gestures in the home
Blake says the same gesture technology used in Kinect could have other uses within the home – but what about inputting large amounts of information?
“That’s not how I think of this revolution. You can input a lot of information with a keyboard, of course, and also with speech. I think in a new way it’s not so much about a high level of information but about a light level of interaction – you’re getting a lot of leverage from the intelligence inside the machine.
“I’d actually like my DVD player to work in a NUI kind of way. The remote control doesn’t work for me; I can’t see it very well so I’d love it if I could just say ‘console wake up’ or something and a menu appears on the screen or something else I could gesture [at].
“In fact we can do a lot better than making a remote control appear on the screen. What would be much more fun would be to make intuitive signs such as one you’d use to stop traffic to stop the video and double finger pointing one way to go fast forward and things like that. What we don’t know is how reliably we can make those gestures interpretable.”
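The mapping Blake describes — intuitive gestures such as a traffic-stop palm or a two-finger point translated into playback commands, ignored when the recogniser isn't sure — can be sketched as a simple dispatch table. The gesture names, commands, and confidence threshold below are illustrative assumptions, not part of any Kinect API; a real system would feed this from a vision pipeline.

```python
from typing import Optional

# Hypothetical gesture-to-command table (names are illustrative).
GESTURE_COMMANDS = {
    "open_palm": "pause",              # the "stop traffic" sign
    "two_fingers_forward": "fast_forward",
    "two_fingers_back": "rewind",
    "wave": "show_menu",               # "console wake up" equivalent
}

def interpret(gesture: str, confidence: float,
              threshold: float = 0.8) -> Optional[str]:
    """Return a playback command only when the recogniser is confident
    enough; unreliable gestures are ignored rather than misfiring."""
    if confidence < threshold:
        return None
    return GESTURE_COMMANDS.get(gesture)
```

The threshold reflects Blake's open question of how reliably such gestures can be interpreted: below it, the system does nothing rather than guess.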
How would you like to interact with your devices?