Greetings, cadets! We’ve built a space-tacular robotics demo and we’ll be showing it off from March 17th–20th at Nvidia’s GPU Technology Conference in San Jose, CA.
Our demo shows how 3D models provide robots with a detailed understanding of the real world.
When shared with machines, 3D models serve as a robot’s map and give it a way to pinpoint its exact location in the world (called localization). With this understanding, robots can interact intelligently with their environments in real time to do things like navigate rooms and entire buildings, identify boundaries, avoid obstacles, and even play interactive games with us human folk.
Currently, robots have to use a combination of bump sensors, laser scanners, and other sensors to navigate indoors, since GPS doesn’t work well inside buildings. These methods are limited, however: they either don’t allow for path planning or are really complicated and expensive.
We’re working to solve the robotic navigation problem by handing machines a detailed 3D view of the world to help them perceive it better. Using depth sensors, like the ones found in Microsoft’s Kinect, Google’s Tango tablet, or Intel’s RealSense, people and robots can scan houses and entire buildings to capture 3D data. Once that 3D data is collected, we send it to our cloud for reconstruction, and we end up with a 3D model.
When a machine has that model, it has a deeper understanding of the physical world. If a robot understands the boundaries of the space it’s in and the obstacles in its path, then it can calculate the most efficient routes to take.
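For the curious, here’s a rough idea of what that route calculation can look like. This is a minimal sketch, not our actual planner: it treats a floor plan as a 2D occupancy grid (0 for free space, 1 for an obstacle) and uses a simple breadth-first search to find a shortest route around the walls. The `plan_path` function and the tiny map are made up for illustration.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid.

    Cells marked 1 are obstacles. Returns a shortest list of
    (row, col) cells from start to goal, or None if no route exists.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # remembers each cell's parent

    while frontier:
        current = frontier.popleft()
        if current == goal:
            # Walk the parent links back to the start to recover the route.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = current
                frontier.append((nr, nc))
    return None  # goal is unreachable

# A tiny floor plan: 0 = free space, 1 = wall.
floor = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
route = plan_path(floor, (0, 0), (2, 3))
```

A real robot would plan over the reconstructed 3D model (and weigh distances, turning costs, and sensor uncertainty), but the core idea is the same: once the map says where the obstacles are, finding an efficient route is just a search problem.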
So basically, our technology is really useful for our robot friends. Soon enough, our office bots will be able to fetch us LaCroix cans from the fridge (which is really the main reason we’re working on this).
If you’re dying to see this demo, come see it live at Nvidia’s GPU Technology Conference. Check out our original #GTC15 announcement to grab a promo code for 20% off registration.