
Project Tango can remember where it’s been, making AR work even when your device is distracted


If you’re not familiar with Google’s Project Tango, it’s an image recognition platform, to put it simply. Using a special 3D camera, a Tango-equipped device “looks” at the world around it and uses that information to map a space, add in augmented reality objects, or simply track the location of a mobile device.

It has a wide variety of uses, even in its developer-oriented form, but there are some issues. For one, even the precision accelerometers and gyroscopes in mobile phones have a margin of error, especially during vigorous movement. Add changing lighting and user movement, and you might notice drift, or objects that are supposed to stay still appearing to grow. Which brings us to the topic of today’s Google I/O talk by Tango team leader Wim Meeussen.

Learning its way around

The solution to object drift is area learning. Instead of constantly looking at its surroundings and calculating a new position, area learning allows a Tango device to recognize its surroundings. By building a memory of the world around it, the Tango device can recognize those spaces, and properly fit objects back into them.
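The idea is easier to see in miniature. The toy sketch below is not the Tango API or its actual representation — just a hypothetical illustration of the concept: an “area memory” ties a set of recognizable features to where a virtual object sits, so spotting those features again restores the object’s position instead of re-deriving it from drifting motion sensors.

```python
# Toy sketch (assumed representation, NOT Tango's): an "area memory"
# maps a set of visual features to a virtual object's position.
area_memory = []  # list of (feature_set, object_position)

def learn_area(features, object_position):
    """Remember which features surround the object."""
    area_memory.append((frozenset(features), object_position))

def relocalize(current_features):
    """Return the remembered object position for the best-matching
    learned area, or None if nothing overlaps at all."""
    best, best_overlap = None, 0
    for features, position in area_memory:
        overlap = len(features & set(current_features))
        if overlap > best_overlap:
            best, best_overlap = position, overlap
    return best

learn_area({"window", "desk_edge", "poster"}, (1.2, 0.0, 3.4))

# Later, even after tracking is lost, seeing some of the same
# features snaps the box back to its remembered spot.
print(relocalize({"poster", "desk_edge", "chair"}))  # → (1.2, 0.0, 3.4)
```

The real system matches dense 3D visual features rather than labeled strings, but the principle is the same: recognition replaces dead reckoning.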

Several demos served to illustrate how well this worked. A simple app, lacking area learning, placed a virtual box on a flat surface. After turning away from the podium and shaking the Tango-powered device, the box began to slide around. Eventually, it disappeared, as the device entirely lost track of where the box should be.

Turning on area learning produced entirely different results. Not only did the box stay put while the device shook and looked away, but even covering the camera for several seconds didn’t affect its positioning. Granted, the device sometimes took a few seconds to re-identify the space around it, but once it did, the box popped right back up.

Memory makes for better multiplayer

Area learning really opens up the possibilities of VR and AR gaming with multiple users. Google says the Tango area learning code is easy to implement (capturing a location memory takes one line of code), and sharing it with other devices is even easier (also just one line of code).

For a multiplayer game, one device can capture a location memory of the space, and then export that location memory to other devices in the area. Working from the same model of the room, the devices are able to keep even better track of each other and their surroundings.
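Why does a shared map help? Once two devices agree on the same landmarks, each can solve for its own position in the common room frame. Here’s a deliberately simplified sketch (translation-only, 2D, made-up numbers — nothing like the real Tango math) of how observing shared landmarks pins down a device’s location:

```python
# Toy sketch: a shared map of landmark positions lets a second
# device solve for its own position in the common room frame.
# Assumes translation-only motion in 2D for simplicity.

def estimate_pose(map_points, observed):
    """Each observation is a map point seen relative to the device,
    so averaging the offsets recovers the device's position."""
    n = len(map_points)
    dx = sum(m[0] - o[0] for m, o in zip(map_points, observed)) / n
    dy = sum(m[1] - o[1] for m, o in zip(map_points, observed)) / n
    return (dx, dy)

# Landmarks captured by device A and exported as the shared map.
shared_map = [(0.0, 0.0), (2.0, 0.0), (1.0, 3.0)]

# Device B stands at (4, 1) and sees the same landmarks
# relative to its own position.
observations = [(x - 4.0, y - 1.0) for x, y in shared_map]

print(estimate_pose(shared_map, observations))  # → (4.0, 1.0)
```

With every device localized against the same map, virtual objects land in the same physical spot for everyone — no external trackers or beacons required.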

The results speak for themselves. In a video demo, two users passed a Tango-enabled tablet back and forth, their heads and the device mirrored perfectly in a virtual rendering of the space. All of this takes place without establishing any infrastructure in the area, and with minimal code changes.

What’s next?

If the results of area learning are any indication, Tango is coming along quite nicely. Object permanence, navigation, and multiplayer components are all a big part of the equation, and they can’t be far from production. In fact, Meeussen made sure to remind everyone that Lenovo would be releasing a consumer-targeted, Tango-equipped device later this year.

For now, eager AR users will either need to learn to develop, or wait patiently for the first device with Google’s powerful new technology.
