Detection
Using the previously trained objects, we can now detect them.
Use
Just run the detection script in apps/. It will run detection continuously on the input image/point cloud.
./apps/detection -c detection.ork
The detection script requires a configuration file, passed through the -c option.
If you want continuous detection, you can just run the detection script:
rosrun object_recognition_core detection -c `rospack find object_recognition_tod`/conf/detection.ros.ork
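Note that this assumes a depth camera driver is already publishing the topics used in the configuration (the default /camera/rgb/* and /camera/depth_registered/* topics). With a Kinect-like sensor you would typically start it first, e.g. with the standard openni_launch package:
# Assumption: a Kinect-style sensor driven by openni_launch; depth registration
# is enabled so that /camera/depth_registered/image_raw is published.
roslaunch openni_launch openni.launch depth_registration:=true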
There is also an actionlib server, as detailed in the actionlib server documentation:
rosrun object_recognition_ros server -c `rospack find object_recognition_tod`/conf/detection.ros.ork
This will start a server with a given configuration file. If you want to test the server, just execute the client once:
rosrun object_recognition_ros client
You can also use roslaunch if you want traditional actionlib support. The launch file has a config_file argument that lets you choose between different pipelines:
roslaunch object_recognition_ros server.robot.launch
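For example, assuming the config_file argument is passed in the usual roslaunch way, you can point the server at the TOD configuration shipped with object_recognition_tod:
# Select the TOD detection pipeline through the config_file launch argument.
roslaunch object_recognition_ros server.robot.launch config_file:=`rospack find object_recognition_tod`/conf/detection.ros.ork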
A typical command line session might look like:
% apps/detection -c `rospack find object_recognition_tod`/conf/detection.ros.ork
[ INFO] [1317692023.717854617]: Initialized ros. node_name: /ecto_node_1317692023710501315
Threadpool executing [unlimited] ticks in 5 threads.
[ INFO] [1317692024.254588151]: Subscribed to topic:/camera/rgb/camera_info with queue size of 0
[ INFO] [1317692024.255467268]: Subscribed to topic:/camera/depth_registered/camera_info with queue size of 0
[ INFO] [1317692024.256186358]: Subscribed to topic:/camera/depth_registered/image with queue size of 0
[ INFO] [1317692024.256863212]: Subscribed to topic:/camera/rgb/image_color with queue size of 0
model_id: e2449bdc43fd6d9dd646fcbcd012daaa
span: 0.433393 meters
1
***Starting object: 0
* starting RANSAC
added : 1
added : 0
* n inliers: 1824
[-0.056509789, 0.99800211, 0.028263446;
0.94346958, 0.062639669, -0.32548648;
-0.32660651, 0.0082725696, -0.94512439]
[-0.32655218; 0.03684178; 0.85040951]
********************* found 1poses
[ INFO] [1317692117.187226953]: publishing to topic:/object_ids
[ INFO] [1317692117.188155476]: publishing to topic:/poses
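To check that results are actually being published, you can watch the advertised topics with the standard rostopic tools (independent of ORK):
# List the detection topics and print the detected poses as they arrive.
rostopic list | grep -E 'object_ids|poses'
rostopic echo /poses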
Command Line Interface
usage: detection [-h] [-c CONFIG_FILE] [--visualize] [--niter ITERATIONS]
[--shell] [--gui] [--logfile LOGFILE] [--graphviz]
[--dotfile DOTFILE] [--stats]
optional arguments:
-h, --help show this help message and exit
-c CONFIG_FILE, --config_file CONFIG_FILE
Config file
--visualize If set and the pipeline supports it, it will display
some windows with temporary results
Ecto runtime parameters:
--niter ITERATIONS Run the graph for niter iterations. 0 means run until
stopped by a cell or external forces. (default: 0)
--shell Bring up an ipython prompt, and execute
asynchronously. (default: False)
--gui Bring up a gui to help execute the plasm.
--logfile LOGFILE Log to the given file, use tail -f LOGFILE to see the
live output. May be useful in combination with --shell
--graphviz Show the graphviz of the plasm. (default: False)
--dotfile DOTFILE Output a graph in dot format to the given file. If no
file is given, no output will be generated. (default:
)
--stats Show the runtime statistics of the plasm.
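For instance, combining some of the flags above, a short debugging run that pops up the visualization windows and prints the runtime statistics of the plasm could look like this (the configuration file name is just a placeholder):
./apps/detection -c detection.ork --visualize --niter 100 --stats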
Configuration File
The configuration file is where you define your graph. With the current ORK, you can choose any of the following sources:
source_0:
  type: OpenNI
  module: object_recognition_core.io.source
  parameters:
    # The number of frames per second for the depth image: [ecto_openni.FpsMode.FPS_60,
    # ecto_openni.FpsMode.FPS_30, ecto_openni.FpsMode.FPS_15]
    depth_fps: None
    # The resolution for the depth image: [ecto_openni.ResolutionMode.QQVGA_RES,
    # ecto_openni.ResolutionMode.CGA_RES, ecto_openni.ResolutionMode.QVGA_RES,
    # ecto_openni.ResolutionMode.VGA_RES, ecto_openni.ResolutionMode.XGA_RES,
    # ecto_openni.ResolutionMode.HD720P_RES, ecto_openni.ResolutionMode.SXGA_RES,
    # ecto_openni.ResolutionMode.UXGA_RES, ecto_openni.ResolutionMode.HD1080P_RES]
    depth_mode: None
    # The number of frames per second for the RGB image: [ecto_openni.FpsMode.FPS_60,
    # ecto_openni.FpsMode.FPS_30, ecto_openni.FpsMode.FPS_15]
    image_fps: None
    # The resolution for the RGB image: [ecto_openni.ResolutionMode.QQVGA_RES,
    # ecto_openni.ResolutionMode.CGA_RES, ecto_openni.ResolutionMode.QVGA_RES,
    # ecto_openni.ResolutionMode.VGA_RES, ecto_openni.ResolutionMode.XGA_RES,
    # ecto_openni.ResolutionMode.HD720P_RES, ecto_openni.ResolutionMode.SXGA_RES,
    # ecto_openni.ResolutionMode.UXGA_RES, ecto_openni.ResolutionMode.HD1080P_RES]
    image_mode: None
    # The stream mode: [ecto_openni.StreamMode.IR, ecto_openni.StreamMode.DEPTH,
    # ecto_openni.StreamMode.DEPTH_IR, ecto_openni.StreamMode.RGB,
    # ecto_openni.StreamMode.DEPTH_RGB]
    stream_mode: None
source_1:
  type: RosKinect
  module: object_recognition_ros.io.source.ros_kinect
  parameters:
    # If the cropper cell is enabled
    crop_enabled: True
    # The ROS topic for the depth camera info.
    depth_camera_info: /camera/depth_registered/camera_info
    # The ROS topic for the depth image.
    depth_image_topic: /camera/depth_registered/image_raw
    # The ROS topic for the RGB camera info.
    rgb_camera_info: /camera/rgb/camera_info
    # The ROS topic for the RGB image.
    rgb_image_topic: /camera/rgb/image_color
    # The maximum x value (in the camera reference frame)
    x_max: 3.40282346639e+38
    # The minimum x value (in the camera reference frame)
    x_min: -3.40282346639e+38
    # The maximum y value (in the camera reference frame)
    y_max: 3.40282346639e+38
    # The minimum y value (in the camera reference frame)
    y_min: -3.40282346639e+38
    # The maximum z value (in the camera reference frame)
    z_max: 3.40282346639e+38
    # The minimum z value (in the camera reference frame)
    z_min: -3.40282346639e+38
source_2:
  type: BagReader
  module: object_recognition_ros.io.source.bag_reader
  parameters:
    # The bag file name.
    bag: data.bag
    # If the cropper cell is enabled
    crop_enabled: True
    # The ROS topic for the depth camera info.
    depth_camera_info: /camera/depth_registered/camera_info
    # The ROS topic for the depth image.
    depth_image_topic: /camera/depth_registered/image_raw
    # The ROS topic for the RGB camera info.
    rgb_camera_info: /camera/rgb/camera_info
    # The ROS topic for the RGB image.
    rgb_image_topic: /camera/rgb/image_color
    # The maximum x value (in the camera reference frame)
    x_max: 3.40282346639e+38
    # The minimum x value (in the camera reference frame)
    x_min: -3.40282346639e+38
    # The maximum y value (in the camera reference frame)
    y_max: 3.40282346639e+38
    # The minimum y value (in the camera reference frame)
    y_min: -3.40282346639e+38
    # The maximum z value (in the camera reference frame)
    z_max: 3.40282346639e+38
    # The minimum z value (in the camera reference frame)
    z_min: -3.40282346639e+38
any of the following sinks:
sink_0:
  type: Publisher
  module: object_recognition_ros.io.sink.publisher
  parameters:
    # The DB parameters
    db_params: <object_recognition_core.boost.interface.ObjectDbParameters object at 0x7ff6a6968310>
    # Determines if the topics will be latched.
    latched: True
    # The ROS topic to use for the marker array.
    markers_topic: markers
    # The ROS topic to use for the object meta info string
    object_ids_topic: object_ids
    # The ROS topic to use for the pose array.
    pose_topic: poses
    # Sets whether the point cloud clusters have to be published or not
    publish_clusters: True
    # The ROS topic to use for the recognized object
    recognized_object_array_topic: recognized_object_array
sink_1:
  type: GuessCsvWriter
  module: object_recognition_core.io.sink
  parameters:
    # The run number
    run_number: 0
    # The name of the team to consider
    team_name:
or the following pipelines:
detection_pipeline_0:
  type: TransparentObjectsDetector
  module: object_recognition_transparent_objects.detector
  parameters:
    # The DB configuration parameters as a JSON string
    json_db:
    # A set of object ids as a JSON string: '["1576f162347dbe1f95bd675d3c00ec6a"]' or 'all'
    json_object_ids: all
    # The method the models were computed with
    method: TransparentObjects
    # The DB parameters
    object_db: None
    # The filename of the registration mask.
    registrationMaskFilename:
    # Visualize results
    visualize: False
detection_pipeline_1:
  type: TodDetector
  module: object_recognition_tod.detector
  parameters:
    # The DB to get data from as a JSON string
    json_db: {}
    # Parameters for the descriptor as a JSON string. It should have the format:
    # "{"type":"ORB/SIFT whatever", "module":"where_it_is", "param_1":val1, ....}
    json_descriptor_params: {"type": "ORB", "module": "ecto_opencv.features2d"}
    # Parameters for the feature as a JSON string. It should have the format: "{"type":"ORB/SIFT
    # whatever", "module":"where_it_is", "param_1":val1, ....}
    json_feature_params: {"type": "ORB", "module": "ecto_opencv.features2d"}
    # The ids of the objects to find as a JSON list or the keyword "all".
    json_object_ids: all
    # Minimum number of inliers
    min_inliers: 15
    # Number of RANSAC iterations.
    n_ransac_iterations: 1000
    # The search parameters as a JSON string
    search: {}
    # The error (in meters) from the Kinect
    sensor_error: 0.00999999977648
    # If true, some windows pop up to see the progress
    visualize: False
More of any of those (sources, sinks or pipelines) can obviously be added by the user. A minimal configuration combining one of each is sketched below.
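As an illustration only, a minimal configuration file combining the RosKinect source, the Publisher sink and the TodDetector pipeline could be sketched as follows; the parameter values are the ones listed above, omitted parameters are assumed to keep their defaults, and the shipped conf/detection.ros.ork remains the reference for a known-good setup:
# Sketch of a combined detection configuration (values taken from the listings above).
source_1:
  type: RosKinect
  module: object_recognition_ros.io.source.ros_kinect
  parameters:
    rgb_image_topic: /camera/rgb/image_color
    rgb_camera_info: /camera/rgb/camera_info
    depth_image_topic: /camera/depth_registered/image_raw
    depth_camera_info: /camera/depth_registered/camera_info

sink_0:
  type: Publisher
  module: object_recognition_ros.io.sink.publisher
  parameters:
    latched: True

detection_pipeline_1:
  type: TodDetector
  module: object_recognition_tod.detector
  parameters:
    json_object_ids: all
    min_inliers: 15
    n_ransac_iterations: 1000
    sensor_error: 0.00999999977648
    visualize: False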