Can Google Fix Gesture Tech With Tiny, All-Knowing Sensors?

These chips are so powerful, they can read the tiniest of movements. It's as amazing as it is terrifying.
The tiny Soli sensor uses radar to watch your fingers move. Peter McCollough for WIRED

Ivan Poupyrev looks like he's playing the world's smallest violin. Inside a conference room at Google's San Francisco office, there's a screen in front of him displaying raw output data from a tiny sensor just below his hand. He moves his thumb up and down against his finger, at first quickly and then almost imperceptibly. Each time, the blue dot on the screen moves along with his finger. He flips to a new demo, and now he's making a circle with his thumb. The faster he goes, the faster the blue dot spins.

This is Project Soli, which Poupyrev has been working on inside Google's top-secret ATAP division. (He's also working on a crazy-ambitious plan for getting conductive textiles into everything we own and wear---Soli is his “side project.”) It's a tiny chip, one he hopes will soon be remarkably easy to add to nearly any device: inside the frame of a VR helmet, the bezel of a smartwatch, the chassis of your phone.

Poupyrev excitedly hands me a chip nine millimeters square, then quickly chastises me when I try to pick it up. There are only a couple of these in the world as of last week---that's when Soli went from great idea to the beginning of something very real. The chip packs 60GHz RF transmitters into a single piece of silicon, and it was built in a mere 10 months. It's modular, meaning it can be placed anywhere. And instead of shooting one beam and collecting a single point of data, the Soli chip creates a wide radar bulb, designed to be big enough to see your entire hand all at once.

It can detect movement of less than a millimeter---you hold your hand as still as possible, and it still sees huge motion. 3,000 times a second, it collects information about where your hand and fingers are, and what they're doing. It only cares about motion---like an eagle, Poupyrev says, all Soli knows is that there's something moving. And through Google's machine-learning algorithms, it's beginning to see gestures in what it captures. More importantly, it can see and recognize the tiniest, most specific of movements, and translate that to your gadgets.

Project Soli revolves around two ideas: One, hands are a great input device. Two, every existing gesture control system sucks. "In the real world, we are very good with our hands," Poupyrev says. He points to a graph showing how our fingers are uniquely tightly connected to our motor cortex. "So why can we use our hands in the real world, and not with technology?"

Ivan Poupyrev, ATAP Technical Project Lead.

Peter McCollough for WIRED

But we do use our hands to control technology; we just often choose not to because it's awkward. Poupyrev waves this off. "We quite often think about interaction with your hand as a gesture interaction---OK sign, V sign---but that's meaningless." Primitive hand signals are fine for communicating with people, he says, but they don't make sense with computers. "That's not how we interact with tools. And a computer's a tool. We should not think about the computer as another human being, but as a tool, and we should apply gesture language from the tool interaction to using your computer, rather than the human-to-human interaction."

Forget big gestures, he says. Think about the tiny ones: the way you swipe a touchscreen, twist a knob on your stereo, or scroll your finger around the iPod's touch wheel. With the right tracking system, you could flip a light switch without the switch, or turn up the volume on the speakers just by sliding your finger, without actually touching them. The gestures don't have to be huge and exaggerated: you don't need to wave like a madman in front of your Kinect. They can be as small as they are in real life.

Right now, gesture technology depends on exaggeration. Camera-based systems, like Leap Motion or Intel's RealSense, are big and slow, and can't see through walls or in the dark. Capacitive sensors are great for touch, but not for seeing in three dimensions; cross your fingers and they fall apart.

The challenge of Soli was to shrink radar into the size of a computer chip.

Peter McCollough for WIRED

Radar, on the other hand, perfectly suited Poupyrev's needs. "3D, super precise, overlap motions of fingers, doesn’t matter. Works through materials, day and night. Problem is, it doesn’t fit into a watch." Project Soli didn’t want to reinvent radar, just shrink it.

The technology is way ahead of the implementation. The data is rich and detailed, but there's no obvious way to teach people how to twist an air screwdriver or press an air button.

Soli is designed to be modular, so the antennas can go anywhere.

Peter McCollough for WIRED

One use case Poupyrev is thinking about is abstracted watch crowns---you could just twist your fingers like you're spinning a watch crown, and change the time without touching anything. If that sounds overly conceptual, Poupyrev agrees. "We have a whole bunch of ideas about what we can do," he says, "but we need a product. We need a hero product."

The possibilities are endless, and equal parts impressive and terrifying. These chips can collect so much data, so often, so fast, that Poupyrev thinks they can detect almost anything. With the right algorithms and software, anyway---and that's what Google does best.

"Can it see people walking? Yes it can. Can it see people breathing? Yes we can. Can you see if it’s grandpa or grandma? Well, probably, because grandpa and grandma probably walk differently. Just like we distinguish your hand, we distinguish people." It can see if someone's breathing, even if they're underneath a blanket. It can see through walls, through rain, through darkness. And all it needs is for you to just sort of, barely move two fingers.