
Only released in EOL distros:  


Package Summary

Completed as a graduate class project, this stack tracks a toy computer using 2D RGB information and can cause the PR2 to point at the object in 3D space using the Kinect's point cloud.


For our final Computer Vision project, our team was given the task of detecting a small toy computer (shown to the right). After discussing potential ways of tackling such a problem and noting the numerous computer vision techniques used to detect objects, we decided to base our approach around classification. Although there are many out-of-the-box solutions that could aid a vision-based classification system, we instead chose lower-level processing techniques to build a more robust and original pipeline. The pipeline, which is discussed in detail in the final report, consists of a custom image segmentation algorithm and feature extraction. This approach let us tune exact parameters to more accurately detect target objects.




Our project's overarching goal was to accurately detect the toy computer using the techniques mentioned above. To challenge ourselves to build a robust system, we took this goal one step further. Computer vision is an area of research that encompasses many areas of science and engineering; therefore, we wanted to build a system that would not only meet the main recognition objective but, more importantly, make practical use of the resulting data. Thus, we decided to use the PR2 robot (seen left). Our end objective was to use the robot's Kinect RGB camera to accurately, confidently, and efficiently detect the object using only 2D image data. Once we had a confident match, we would use the Kinect's point cloud to pinpoint a probable location of the object in 3D space. By converting this point into the robot's coordinate frame, the robot can then physically indicate the location of the object. This physical demo was the ultimate end goal of the project.
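The frame conversion mentioned above amounts to applying a homogeneous transform to the 3D point extracted from the Kinect's point cloud. The following is a minimal sketch of that step; the transform values are illustrative only (the real values come from the robot's tf tree at runtime), and the rotation shown is the standard mapping from a camera optical frame (z forward, x right, y down) to a robot body frame (x forward, y left, z up):

```python
import numpy as np

# Hypothetical 4x4 homogeneous transform from the Kinect optical frame
# to the robot base frame. The numbers are placeholders: on the real
# robot this transform is looked up from tf, not hard-coded.
T_base_kinect = np.array([
    [0.0,  0.0, 1.0, 0.1],  # optical z (forward) -> base x
    [-1.0, 0.0, 0.0, 0.0],  # optical x (right)   -> base -y
    [0.0, -1.0, 0.0, 1.2],  # optical y (down)    -> base -z; camera 1.2 m up
    [0.0,  0.0, 0.0, 1.0],
])

def to_base_frame(p_kinect):
    """Convert a 3D point from the Kinect optical frame to the base frame."""
    p_h = np.append(np.asarray(p_kinect, dtype=float), 1.0)  # homogeneous coords
    return (T_base_kinect @ p_h)[:3]

# A point 2 m straight in front of the camera lands 2.1 m ahead of the
# base at camera height: x=2.1, y=0.0, z=1.2 in the base frame.
point_in_base = to_base_frame([0.0, 0.0, 2.0])
print(point_in_base)
```

Once the point is in the base frame, it can be handed to the arm planner as a pointing target.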

In hopes of releasing our code, we decided to build on the widely used OpenCV library. Furthermore, all of our code was implemented to run within ROS.



The processor node reads in streams from the PR2's onboard cameras and attempts to detect the toy computer via multiple steps. Additional information is available in the project's writeup.

Subscribed Topics

/kinect_head/rgb/image_rect_color (sensor_msgs/Image)
/kinect_head/depth_registered/points (sensor_msgs/PointCloud2)

Published Topics

/tf (tf/tfMessage)


The robot node moves the robot in response to the processor node. Additional information is available in the project's writeup.

Subscribed Topics

/move_left_arm/feedback (move_arm_msgs/MoveArmFeedback)
/move_left_arm/status (move_arm_msgs/MoveArmStatus)
/move_left_arm/result (move_arm_msgs/MoveArmResult)
/move_right_arm/feedback (move_arm_msgs/MoveArmFeedback)
/move_right_arm/status (move_arm_msgs/MoveArmStatus)
/tf (tf/tfMessage)

Published Topics

/move_left_arm/goal (move_arm_msgs/MoveArmGoal)
/move_left_arm/cancel (actionlib_msgs/GoalID)
/move_right_arm/goal (move_arm_msgs/MoveArmGoal)
/move_right_arm/cancel (actionlib_msgs/GoalID)


To install the rail_cv_project stack, you can either install from source or from the Ubuntu package:


Source

To install from source, execute the following:
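A typical rosbuild-era source install for a stack like this looks like the following. The repository URL is an assumption; substitute the project's actual GitHub location, and clone into a directory on your ROS_PACKAGE_PATH:

```shell
# Clone the stack (the GitHub URL below is an assumed placeholder)
git clone https://github.com/WPI-RAIL/rail_cv_project.git
# Resolve system dependencies for the stack
rosdep install rail_cv_project
# Build the stack
rosmake rail_cv_project
```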

Ubuntu Package

To install the Ubuntu package, execute the following:
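Assuming the package follows the standard ros-&lt;distro&gt;-&lt;stack&gt; naming convention, the install command would look like this; replace the distro name with the EOL distribution you are running:

```shell
# <distro> is a placeholder for your ROS distribution (e.g., fuerte)
sudo apt-get install ros-<distro>-rail-cv-project
```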


To run the project, a launch file is provided to start the necessary robot nodes. The actual processing node that detects the object can be run either on the PR2's onboard computer or on a local machine connected to the robot's roscore. Note that the processor node will attempt to display several windows, so X forwarding must be enabled on the robot's SSH connection if you decide to run the processor node on the robot.

Robot Nodes

To run the nodes required on your PR2, first start your robot (i.e., robot start) and then run the following launch file:
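The invocation would follow the usual roslaunch pattern; the launch file name below is an assumed placeholder, so check the stack's launch directory for the actual file:

```shell
# Launch file name is a placeholder; see the package's launch/ directory
roslaunch rail_cv_project robot.launch
```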

Project Nodes

To run the nodes distributed with this project, run the following nodes in two separate terminals:
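Based on the two nodes described above (the processor node and the robot node), the rosrun invocations would look like the following; the executable names are assumed placeholders, so check the package for the actual names:

```shell
# Terminal 1: the vision processor node (name is a placeholder)
rosrun rail_cv_project processor

# Terminal 2: the robot control node (name is a placeholder)
rosrun rail_cv_project robot
```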

Demo Videos

The following is a set of videos that demo the full working system:


Please send bug reports to the GitHub Issue Tracker. Feel free to contact us at any point with questions and comments.


