To hear technology critics tell the story, not much has changed with the so-called user interface since the early days of the Mac. Point and click is still point and click, except for multitouch: touch, tap, or slide. More people on planet Earth use a smartphone or tablet, sans mouse or trackpad, than use a personal computer.
If not much has changed since the early days of point and click, and little has changed on the smartphone since it was re-invented by Apple in 2007, what will be the user interface of the future?
Who will change the course of technology? Apple? Google? Microsoft?
Without question, all three giants, and many independent thinkers and organizations besides, are working on the next great user interface to replace multitouch and point and click. Google is working on a new technology that aims to eliminate touch.
Wave hello to Project Soli.
Soli is a new sensing technology that uses miniature radar to detect touchless gesture interactions.
Intriguing, no?
Just as Face ID projects tens of thousands of infrared dots to map a face, Soli will use constantly scanning radar to monitor gestures. Use your hands instead of a mouse or a touchscreen.
How does it work?
Soli is a purpose-built interaction sensor that uses radar for motion tracking of the human hand.
A radar-like sensor array is stuffed into a chip about the size of a dime. Those sensors can detect specific movements and attach them to actions or functions. For example, in the physical world we put a key into a lock and turn it one way to lock, the opposite way to unlock.
Imagine the same thing using motion alone: Soli’s radar recognizes the key-turning gesture and, depending on the objective, locks or unlocks something.
Virtual Tools are gestures that mimic familiar interactions with physical tools. This metaphor makes it easier to communicate, learn, and remember Soli interactions.
Instead of physical interaction, merely mimicking an action with your fingers or hand would produce a similar result on the device’s screen. A sketch of the idea follows.
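Google has not published a detailed Soli API, so here is a hypothetical sketch of the pattern in Python: a recognizer watching the radar stream emits named gestures, and application code binds those names to actions, much as we bind click handlers today. The gesture labels and the GestureDispatcher class are my invention, not Google’s.

```python
# Hypothetical sketch: binding recognized Soli-style gestures to actions.
# The gesture labels and GestureDispatcher are illustrative, not Google's API.
from typing import Callable, Dict

class GestureDispatcher:
    """Maps gesture labels emitted by a recognizer to application actions."""

    def __init__(self) -> None:
        self._bindings: Dict[str, Callable[[], None]] = {}

    def bind(self, gesture: str, action: Callable[[], None]) -> None:
        self._bindings[gesture] = action

    def on_gesture(self, gesture: str) -> None:
        # Unbound gestures are ignored, so stray hand motion does nothing,
        # the same contract as an untouched onscreen button.
        handler = self._bindings.get(gesture)
        if handler is not None:
            handler()

dispatcher = GestureDispatcher()
dispatcher.bind("key_turn_clockwise", lambda: print("unlocked"))
dispatcher.bind("key_turn_counterclockwise", lambda: print("locked"))

# The recognizer would call this when it is confident it saw the virtual key turn:
dispatcher.on_gesture("key_turn_clockwise")  # -> unlocked
```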
I’m skeptical.
The Soli chip emits electromagnetic waves in a broad beam. Objects within the beam scatter this energy, reflecting some portion back toward the radar antenna. Properties of the reflected signal, such as energy, time delay, and frequency shift, capture rich information about the object’s characteristics and dynamics, including size, shape, orientation, material, distance, and velocity.
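Two of those properties have textbook interpretations, and a minimal sketch shows how they become the numbers a gesture recognizer would consume. Assuming Soli’s published 60 GHz operating band, round-trip time delay gives distance and Doppler shift gives radial velocity; the example values below are mine, chosen only for illustration.

```python
# Minimal sketch: turn two reflected-signal properties into motion data.
C = 3.0e8          # speed of light, m/s
F_CARRIER = 60e9   # Soli operates in the 60 GHz band

def range_from_delay(delay_s: float) -> float:
    """Round-trip time delay to distance: d = c * t / 2."""
    return C * delay_s / 2

def velocity_from_doppler(doppler_hz: float) -> float:
    """Doppler frequency shift to radial velocity: v = f_d * c / (2 * f_carrier)."""
    return doppler_hz * C / (2 * F_CARRIER)

# A hand about 25 cm away reflects with roughly a 1.67 ns round trip,
# and at 60 GHz a 4 kHz Doppler shift is about 1 cm/s of radial motion.
print(range_from_delay(1.67e-9))      # ~0.25 m
print(velocity_from_doppler(4000.0))  # ~0.01 m/s
```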
I’m still skeptical.
Why?
The touch of an onscreen button is very specific to an action: if you don’t touch it, nothing happens, and everyone touches the button the same way. Hand and finger gestures vary from person to person, and we already suffer accidental touches on our touchscreen devices.
Where will Soli be employed?
The Soli chip can be embedded in wearables, phones, computers, cars, and IoT devices in our environment.
Translation: Anywhere it works.
Soli has no moving parts; it fits onto a chip and consumes little energy. It is not affected by light conditions, and it works through most materials. Just imagine the possibilities…
I’m still working on the possibilities, but then again, my genes originate in the Show-Me State.