Kinect to Add Vision Capability to a Robot

07 June 2013
By Dmitry Yakovlev, VP of Engineering

Kinect, a well-known sensor for the Xbox 360 console, is usually associated with computer games. It allows people to interact with the console through body postures, gestures, and voice commands, without any additional devices. DataArt engineers attached a Kinect controller to a robot to give it vision capability – this story is about how we did it.

Part 1: Wi-Fi Addicted Machine

A while ago a group of colleagues proficient in embedded systems built a robot capable of moving around under the remote control of an operator, who observes the landscape via a camera installed on top. The robot and operator communicate over a Wi-Fi connection.

The main trouble we faced was the instability of the wireless connection between the operator and the robot, which could cause a loss of control in areas with a weak signal. The most embarrassing moment came when the 40-pound device declared “connection lost” in an elevator. After considering different options, we decided to add vision and speech recognition capabilities to the robot, so it could be controlled with gestures and voice commands by a human standing next to it.

A quick investigation of the computer vision technologies available on the market led us to the Microsoft Kinect, which takes on the burden of recognizing the human body in images and thus dramatically simplifies building natural user interfaces (NUI). The team at Microsoft accomplished something remarkable: they brought the Kinect sensor to market at a price under $200 – a cost many consumers can afford. This technology, previously available only inside research laboratories, greatly facilitates building NUI software.
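To give a feel for what the Kinect hands an application, here is a minimal sketch in Python. It assumes a per-frame dictionary of 3D joint positions, roughly mirroring the joint names the Kinect SDK exposes; the data and function names are illustrative, not the actual DataArt implementation.

```python
# Hypothetical sketch: a single frame of Kinect-style skeleton data,
# mapping named body joints to (x, y, z) positions in meters (z = depth
# from the sensor). The values below are made up for illustration.
SKELETON_FRAME = {
    "head":           (0.0, 1.6, 2.0),
    "shoulder_right": (0.2, 1.4, 2.0),
    "hand_right":     (0.3, 1.7, 1.9),
    "hip_center":     (0.0, 0.9, 2.0),
}

def hand_raised(frame):
    """Toy gesture check: is the right hand above the head?"""
    return frame["hand_right"][1] > frame["head"][1]
```

Because the sensor has already resolved the depth image into labeled joints, a gesture check reduces to a simple geometric comparison per frame, which is exactly why building NUI software on top of it is so much easier.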

We couldn’t miss this advantage!

Part 2: Robot Tracks Humans

The robot recognizes and executes all basic commands – taking and releasing control, moving forward and backward, rotating right and left. The commands can be given via gesture or voice.
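Once a gesture or voice command has been recognized, it still has to be turned into motor actions. A minimal Python sketch of such a dispatch layer might look as follows; the class and function names (`MotorController`, `dispatch`) and the wheel-speed values are assumptions for illustration, not our robot's actual code.

```python
# Hypothetical sketch: mapping recognized commands to differential-drive
# wheel speeds. Speeds are in the range [-1.0, 1.0].
class MotorController:
    """Minimal stand-in for the robot's drive interface."""
    def __init__(self):
        self.log = []  # record of (left, right) speed commands

    def drive(self, left, right):
        self.log.append((left, right))

COMMANDS = {
    "forward":  lambda m: m.drive(1.0, 1.0),
    "backward": lambda m: m.drive(-1.0, -1.0),
    "left":     lambda m: m.drive(-0.5, 0.5),   # spin in place, left
    "right":    lambda m: m.drive(0.5, -0.5),   # spin in place, right
    "stop":     lambda m: m.drive(0.0, 0.0),
}

def dispatch(command, motors):
    """Execute a recognized command; silently ignore unknown ones."""
    action = COMMANDS.get(command)
    if action:
        action(motors)

motors = MotorController()
dispatch("forward", motors)
dispatch("left", motors)
```

Keeping the command table as data means gesture recognition and speech recognition can feed the same dispatcher, which is one plausible way to support both input channels uniformly.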

To make it easier to drive the robot through interiors, we implemented a “follow-me” mode. In this mode the robot moves so as to maintain its distance to the person while keeping them in focus. As a result, the robot follows the human like a dog on a lead.
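The distance-keeping behavior can be sketched as a simple proportional controller. The sketch below assumes the skeleton stream provides the tracked person's distance (meters) and bearing (radians, positive meaning the person is to the robot's right); the gains and names are illustrative assumptions, not DataArt's implementation.

```python
# Hypothetical sketch of a "follow-me" controller: keep a target
# distance to the tracked person and turn to keep them centered.
TARGET_DISTANCE = 1.5   # meters to keep between robot and person
K_LINEAR = 0.8          # gain for forward/backward correction
K_ANGULAR = 1.2         # gain for turning toward the person

def clamp(v):
    """Limit a wheel speed to the drivable range [-1.0, 1.0]."""
    return max(-1.0, min(1.0, v))

def follow_me(distance, bearing):
    """Return (left, right) wheel speeds for a differential drive."""
    linear = K_LINEAR * (distance - TARGET_DISTANCE)
    angular = K_ANGULAR * bearing
    # Positive bearing (person to the right) speeds up the left wheel,
    # turning the robot toward the person.
    return clamp(linear + angular), clamp(linear - angular)
```

At the target distance with the person centered, both outputs are zero and the robot stands still; as the person walks away, the distance error grows and the robot accelerates after them, which produces the dog-on-a-lead behavior described above.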

Watch this video to see our robot in action:


That’s not the end of the story. We are continuously improving our device. We are currently implementing features that will allow the robot to identify employees it encounters in a hall, greet them, and remind them about scheduled meetings.

We are also going to teach our mechanical fellow to detect and avoid obstacles in its path. And perhaps other fun things you might imagine, like going shopping with this iron buddy. Time will tell.

