
Only released in EOL distros:  

cob_object_perception: cob_leptonica | cob_object_detection_fake | cob_object_detection_msgs | cob_read_text | cob_tesseract

Package Summary

Based on the literate_pr2 package by Menglong Zhu: menglong(at)seas.upenn.edu

Detailed Documentation


cob_read_text detects and recognizes text in natural images. The system was trained for indoor household environments with Care-O-bot's stereo vision colour cameras. Text localisation and OCR run as separate stages, and each can be evaluated using precision and recall. Text is located with the Stroke Width Transform. Text whose baseline has been found is rectified into horizontal text using Bézier curves and passed to OCR software for recognition.

To use cob_read_text, clone the repository to your disk and build it.


Single Image

To read text on a single image: roslaunch cob_read_text read_text.launch image:=path_to_image

read_text.launch -> run_detection -> text_detect

Image Stream

To read text from a camera stream: roslaunch cob_read_text read_text_from_camera.launch

read_text_from_camera.launch -> text_detect

Remap "image_color" to the topic that publishes the images to be processed.

Evaluation Database

To read text on images from a database with ground-truth data: roslaunch cob_read_text read_text_with_eval.launch xmlfile:=path_to_xmlfile

read_text_with_eval.launch -> run_detection -> text_detect

Parameters for read_text

Parameters are specified in launch/params.yaml.


smoothImage (bool, default: false)
maxStrokeWidthParameter (int, default: 50)
useColorEdge (bool, default: false)
cannyThreshold1 (int, default: 120)
cannyThreshold2 (int, default: 50)
compareGradientParameter (double, default: 1.57)
swCompareParameter (double, default: 3.0)
colorCompareParameter (int, default: 100)
maxLetterHeight_ (int, default: 100)
varianceParameter (double, default: 1.5)
diagonalParameter (int, default: 10)
pixelCountParameter (int, default: 5)
innerLetterCandidatesParameter (int, default: 5)
heightParameter (double, default: 2.5)
clrComponentParameter (int, default: 20)
distanceRatioParameter (double, default: 2.0)
medianSwParameter (double, default: 2.5)
diagonalRatioParameter (double, default: 2.0)
grayClrParameter (double, default: 10.0)
clrSingleParameter (double, default: 20.0)
areaParameter (double, default: 1.5)
pixelParameter (double, default: 0.3)
p (double, default: 0.99)
maxE (double, default: 0.7)
minE (double, default: 0.4)
bendParameter (int, default: 30)
distaneParameter (double, default: 0.8)
sigma_sharp (double, default: 1.0)
threshold_sharp (double, default: 3.0)
amount_sharp (double, default: 1.5)
result_ (int, default: 2)
showEdge (bool, default: false)
showSWT (bool, default: false)
showLetterCandidates (bool, default: false)
showLetters (bool, default: false)
showPairs (bool, default: false)
showChains (bool, default: false)
showBreakLines (bool, default: false)
showBezier (bool, default: false)
showRansac (bool, default: false)
showNeighborMerging (bool, default: false)
showResult (bool, default: false)
transformImages (bool, default: true)

Additional Applications


cob_read_text is evaluated with the read_evaluation application. read_evaluation takes an .xml file containing a list of images together with ground-truth bounding boxes enclosing every text instance that can be found.

Example input file:

<?xml version="1.0" encoding="UTF-8"?>
<image>
   <taggedRectangle center_x="605.894" center_y="636.016" width="94" height="40" angle="-20" text="Sprite" />
   <taggedRectangle center_x="752.5" center_y="626.5" width="47" height="25" angle="0" text="Red" />
</image>

Each rectangle is described by its centre coordinates (center_x, center_y), its width and height, its rotation angle, and the text it contains.

For every image referenced between <image> and </image> in the .xml file, read_evaluation calls cob_read_text and compares the results with the solution.
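The ground-truth files can be read with any standard XML parser. A minimal sketch in Python, assuming only the <image>/<taggedRectangle> layout and attributes shown in the example above (the real files may contain additional wrapper tags):

```python
# Minimal sketch: parse ground-truth rectangles from an evaluation .xml file.
# Relies only on the taggedRectangle attributes shown in the example above.
import xml.etree.ElementTree as ET

def load_ground_truth(xml_text):
    """Return a list of dicts, one per taggedRectangle element."""
    root = ET.fromstring(xml_text)
    rects = []
    for r in root.iter("taggedRectangle"):
        rects.append({
            "center_x": float(r.get("center_x")),
            "center_y": float(r.get("center_y")),
            "width": float(r.get("width")),
            "height": float(r.get("height")),
            "angle": float(r.get("angle")),
            "text": r.get("text"),
        })
    return rects

example = """<image>
  <taggedRectangle center_x="605.894" center_y="636.016" width="94" height="40" angle="-20" text="Sprite" />
  <taggedRectangle center_x="752.5" center_y="626.5" width="47" height="25" angle="0" text="Red" />
</image>"""

print(load_ground_truth(example)[0]["text"])  # -> Sprite
```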

The evaluation is based on the system of the ICDAR 2005 Robust Reading Competitions, using recall and precision as measures: "Precision p is defined as the number of correct estimates divided by the total number of estimates. Recall r is defined as the number of correct estimates divided by the total number of targets. We define the match mp between two rectangles as the area of intersection divided by the area of the minimum bounding box containing both rectangles. For each rectangle in the set of estimates we find the closest match in the set of targets, and vice versa. Hence, the best match m(r, R) for a rectangle r in a set of rectangles R is defined as:

m(r, R) = max { mp(r, r') | r' ∈ R }

Then, our new more forgiving definitions of precision and recall, where T and E are the sets of ground-truth and estimated rectangles respectively:

p = ( Σ_{r_e ∈ E} m(r_e, T) ) / |E|

r = ( Σ_{r_t ∈ T} m(r_t, E) ) / |T|

(S. M. Lucas. ICDAR 2005 text locating competition results. In Proc. 8th Int’l Conf. Document Analysis and Recognition, volume 1, pages 80–84, 2005)
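The match-based precision and recall above can be sketched in a few lines of Python. This is an illustration of the ICDAR formulas, not the package's implementation, and for simplicity it assumes axis-aligned rectangles given as (x, y, width, height); handling the rotated boxes from the ground-truth files would additionally need polygon intersection:

```python
# Sketch of the ICDAR-style match, precision and recall for axis-aligned
# rectangles (x, y, width, height). Rotated rectangles are not handled.

def match(a, b):
    """mp: intersection area / area of the minimum box containing both."""
    ix = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    bx = max(a[0] + a[2], b[0] + b[2]) - min(a[0], b[0])  # bounding box width
    by = max(a[1] + a[3], b[1] + b[3]) - min(a[1], b[1])  # bounding box height
    return inter / (bx * by)

def best_match(r, rects):
    """m(r, R): best match of rectangle r against the set rects."""
    return max((match(r, r2) for r2 in rects), default=0.0)

def precision_recall(estimates, targets):
    p = sum(best_match(e, targets) for e in estimates) / len(estimates)
    r = sum(best_match(t, estimates) for t in targets) / len(targets)
    return p, r
```

For example, one exact detection plus one spurious detection of a single target yields p = 0.5 and r = 1.0.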

Results are saved in a folder named 'results' inside the folder that contains the images.

Running the application:

roslaunch cob_read_text read_text_with_eval.launch xmlfile:=path_to_xmlfile

The .xml-file has to be specified in read_text_with_eval.launch.


LabelBox can be used to generate an .xml file for read_evaluation. Given a single image or a folder of images, LabelBox produces an .xml file in the format shown above.

Running the application:

$(rospack find cob_read_text)/bin/labelBox path_to_image

$(rospack find cob_read_text)/bin/labelBox path_to_folder/

labelBox Help

Allowed file types: 'png', 'PNG', 'jpg', 'JPG', 'jpeg', 'JPEG', 'bmp', 'BMP', 'tiff', 'TIFF', 'tif' or 'TIF'.
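When labelBox is run on a folder, only files with the extensions above are picked up. A small sketch for pre-filtering a folder accordingly (the helper name is illustrative, not part of the package):

```python
# Illustrative helper: list the files in a folder whose extension matches
# the file types labelBox accepts (case-insensitive).
import os

ALLOWED = {"png", "jpg", "jpeg", "bmp", "tiff", "tif"}

def label_box_candidates(folder):
    """Return sorted file names in folder with a labelBox-readable extension."""
    return sorted(
        f for f in os.listdir(folder)
        if "." in f and f.rsplit(".", 1)[-1].lower() in ALLOWED
    )
```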



2020-03-28 12:32