
JetsonBot Part 7 – Autonomous Following – Vision Robot with a Create 2 Base

JetsonBot Follower

The first autonomous demonstration of the JetsonBot shows autonomous following behavior. Looky here:

Background

In most science fiction settings, robots are autonomous. The most famous movie robots all operate on their own, and most people assume that robots ‘think’ when they see them executing actions.

Most science fiction robots are not only autonomous, but also sentient, that is, self-aware. Self-aware in the sense that they are a parallel kind of life form, built from robotic bits instead of the stuff nature uses for organisms. In almost all of the stories, the robots have evil bits built in. I’ve looked hard for those evil parts online; the electronic parts stores I’ve checked are out of stock.

I’ll let you in on a little secret. Robots are not currently sentient. Sure, they make great villains in movies and stories. They’re fun to talk about and fearmonger over, but we are a very long way from anything close to what is portrayed in books and movies.

Another thing. Robots do not ‘think’ like people think, but people sometimes try to assign anthropomorphic behavior to robots. That’s because, well, that’s what people do.

But we do have autonomous robots today. You can think of an autonomous robot as one that performs behaviors or tasks with a high degree of autonomy, that is, on its own. You’ve seen autonomous robots already: manufacturing robots building cars, or vacuum cleaner robots like the iRobot Roomba. However, everything an autonomous robot does comes from a preprogrammed set of instructions, or something very closely derived from them.

The Follower algorithm makes the JetsonBot autonomous. Start the JetsonBot up, load the Follower program, and off it goes. The robot appears to ‘follow’ a person around, all of its own accord.

Follower Implementation

The Follower is a standard TurtleBot demonstration. Here is a simplified explanation of how the algorithm works. There is a user-definable area in front of the RGBD camera in the shape of a box. The box is defined in real-world space with a width, height, and depth.

The RGBD depth stream is sent through ROS to the Point Cloud Library (PCL), which calculates the centroid of the points that fall inside the ‘box’. If there are no points in the box, the robot is told to stop. Otherwise, the robot is told to position itself at a set distance from the calculated centroid. As a result, the robot appears to ‘follow’ a subject who walks away, and to back up when the subject moves too close.
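To make the idea concrete, here is a minimal Python/rospy sketch of that loop: keep only the cloud points inside the box, average them to get a centroid, and publish a velocity command that steers the robot toward a point a fixed distance from that centroid. The topic names, box dimensions, and gains below are assumptions for illustration; the actual TurtleBot follower is a C++ nodelet, not this script.

```python
#!/usr/bin/env python
# Simplified follower sketch modeled on the TurtleBot follower demo.
# Topic names, box dimensions, and gains are assumptions - adjust for
# your own JetsonBot configuration.
import rospy
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2
from geometry_msgs.msg import Twist

# The 'box' in front of the camera, in meters (camera optical frame:
# x right, y down, z forward).
MIN_X, MAX_X = -0.3, 0.3   # width of the box
MIN_Y, MAX_Y = -0.3, 0.3   # height of the box
MAX_Z = 1.5                # depth of the box
GOAL_Z = 0.7               # desired following distance
Z_SCALE = 0.6              # linear velocity gain
X_SCALE = 2.0              # angular velocity gain

cmd_pub = None

def cloud_callback(cloud):
    # Accumulate the points that fall inside the box.
    n = 0
    sum_x = sum_z = 0.0
    for x, y, z in point_cloud2.read_points(
            cloud, field_names=("x", "y", "z"), skip_nans=True):
        if MIN_X < x < MAX_X and MIN_Y < y < MAX_Y and z < MAX_Z:
            sum_x += x
            sum_z += z
            n += 1

    cmd = Twist()  # zero velocities: stop if the box is empty
    if n > 0:
        cx, cz = sum_x / n, sum_z / n
        # Drive so the centroid sits GOAL_Z meters ahead, turning to
        # keep the centroid centered in front of the camera.
        cmd.linear.x = (cz - GOAL_Z) * Z_SCALE
        cmd.angular.z = -cx * X_SCALE
    cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("simple_follower")
    cmd_pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("camera/depth/points", PointCloud2, cloud_callback)
    rospy.spin()
```

In practice you would downsample the cloud before iterating (for example with a PCL VoxelGrid filter), since looping over a full-resolution cloud in Python is far too slow for a smooth control loop.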

Note that the JetsonBot does not have sensors facing the rear of the robot. It is possible to back the robot against a wall and have it keep trying to drive backwards. As we’ve discussed before, there is an array of sensors and a bumper at the front of the robot which cuts power to the motors in the same situation when the robot is moving forward. This makes sense for the original vacuum cleaner base, which almost always drives forward and only rarely backs up. It also points out that the general case always demands more thought than a narrowly defined specific case.

3D Sensor

The RGBD camera used in the demo is an ASUS Xtion PRO LIVE, which provides an RGB image stream and a depth stream. The depth stream is a bitmap where each ‘pixel’ indicates the distance of that point from the sensor. The technology used is similar to that of the original Microsoft Kinect.

Here’s a quick video on how to activate the 3D sensor under ROS. Looky here:
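If you are bringing the sensor up yourself, the Xtion is commonly launched under ROS with the OpenNI2 driver (for example, roslaunch openni2_launch openni2.launch). The short script below is one way to sanity-check that the depth stream is actually publishing; the topic name is an assumption and depends on how your camera namespace is set up.

```python
#!/usr/bin/env python
# Quick check that the Xtion depth stream is publishing under ROS.
# The topic name assumes an openni2_launch-style namespace; change it
# if your launch file remaps the camera.
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def depth_callback(msg):
    # Depth images arrive as 16UC1 (millimeters) or 32FC1 (meters)
    # depending on driver settings; passthrough keeps the raw values.
    depth = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
    h, w = depth.shape[:2]
    rospy.loginfo("Distance at image center: %s", depth[h // 2, w // 2])

if __name__ == "__main__":
    rospy.init_node("depth_check")
    rospy.Subscriber("camera/depth/image_raw", Image, depth_callback)
    rospy.spin()
```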

Unfortunately, the Xtion is going out of production, probably because Apple acquired PrimeSense, the company that licensed the underlying technology. That puts the 3D sensor on the list of things to upgrade on the JetsonBot.

Off to adding a panel antenna.
