Cells API

object_recognition_core.ecto_cells.db

ObservationInserter

Brief doc

Inserts observations into the database.

Parameters

  • db_params   type: object_recognition_core::db::ObjectDbParameters    not required   default: [not visible from python]

    The database parameters

  • object_id   type: std::string    required   no default value

    The object id to associate this frame with.

  • session_id   type: std::string    required   no default value

    The session id to associate this frame with.

Inputs

  • K   type: cv::Mat   

    The camera intrinsic matrix

  • R   type: cv::Mat   

    The orientation.

  • T   type: cv::Mat   

    The translation.

  • depth   type: cv::Mat   

    The 16-bit depth image.

  • frame_number   type: int   

    The frame number

  • image   type: cv::Mat   

    An RGB full-frame image.

  • mask   type: cv::Mat   

    The mask.
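
A minimal sketch of constructing this cell from Python and wiring it into an ecto plasm; the `capture` cell and the parameter values are hypothetical placeholders:

    from object_recognition_core.ecto_cells import db

    # object_id and session_id are required parameters.
    inserter = db.ObservationInserter(object_id='my_object',
                                      session_id='session_01')

    # The inputs are normally wired from an upstream capture cell inside a
    # plasm (`capture` is hypothetical here):
    #   plasm.connect(capture['K', 'R', 'T', 'depth', 'image', 'mask'] >>
    #                 inserter['K', 'R', 'T', 'depth', 'image', 'mask'])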

ModelWriter

Brief doc

Takes a document that should be considered a model and persists it. It also stores common metadata that is useful for searching.

Parameters

  • json_params   type: std::string    required   default:

    The non-discriminative parameters used, as JSON.

  • method   type: std::string    required   default:

    The method used to compute the model (e.g. ‘TOD’ ...).

Inputs

  • db_document   type: object_recognition_core::db::Document   

  • json_db   type: std::string   

    The DB parameters

  • object_id   type: std::string   

    The object id to associate this model with.
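
A sketch of instantiating the writer from Python with illustrative parameter values; the upstream `trainer` cell in the comment is hypothetical:

    from object_recognition_core.ecto_cells import db

    # method and json_params are required; both default to the empty string.
    writer = db.ModelWriter(method='TOD', json_params='{}')

    # db_document, json_db and object_id are inputs wired from a training
    # pipeline, e.g.:
    #   plasm.connect(trainer['db_document', 'json_db', 'object_id'] >>
    #                 writer['db_document', 'json_db', 'object_id'])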

ObservationReader

Brief doc

Reads observations from the database.

Inputs

  • document   type: object_recognition_core::db::Document   

    The observation document to load.

Outputs

  • K   type: cv::Mat   

    The camera intrinsic matrix

  • R   type: cv::Mat   

    The orientation.

  • T   type: cv::Mat   

    The translation.

  • depth   type: cv::Mat   

    The 16-bit depth image.

  • frame_number   type: int   

    The frame number

  • image   type: cv::Mat   

    An RGB full-frame image.

  • mask   type: cv::Mat   

    The mask.
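
A sketch of using the reader in a plasm; `doc_source` and `consumer` are hypothetical placeholders for whatever produces observation documents and consumes the decoded frames:

    from object_recognition_core.ecto_cells import db

    reader = db.ObservationReader()

    # Hypothetical wiring:
    #   plasm.connect(doc_source['document'] >> reader['document'])
    #   plasm.connect(reader['image', 'depth', 'mask'] >>
    #                 consumer['image', 'depth', 'mask'])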

object_recognition_core.ecto_cells.io

GuessTerminalWriter

Brief doc

Given guesses, writes them to the terminal.

Parameters

  • base_directory   type: std::string    not required   no default value

    Base directory

  • config_file   type: std::string    not required   no default value

    Configuration file

Inputs

  • pose_results   type: std::vector<object_recognition_core::common::PoseResult, std::allocator<object_recognition_core::common::PoseResult> >   

    The results of object recognition
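
A sketch of constructing the cell; both parameters are optional, and `detector` is a hypothetical cell producing pose_results:

    from object_recognition_core.ecto_cells import io

    terminal_writer = io.GuessTerminalWriter()

    #   plasm.connect(detector['pose_results'] >> terminal_writer['pose_results'])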

GuessCsvWriter

Brief doc

Given guesses, writes them to a CSV in the NIST format.

Parameters

  • run_number   type: int    not required   no default value

    The run number

  • team_name   type: std::string    not required   no default value

    The name of the team to consider

Inputs

  • pose_results   type: std::vector<object_recognition_core::common::PoseResult, std::allocator<object_recognition_core::common::PoseResult> >   

    The results of object recognition
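
A sketch along the same lines, with illustrative parameter values and the same hypothetical `detector` cell:

    from object_recognition_core.ecto_cells import io

    csv_writer = io.GuessCsvWriter(team_name='my_team', run_number=1)

    #   plasm.connect(detector['pose_results'] >> csv_writer['pose_results'])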

PipelineInfo

Brief doc

Spits out the parameters given as a JSON string.

Parameters

  • parameters   type: std::string    not required   default:

    The JSON parameters of the pipeline.

Outputs

  • parameters   type: or_json::Value_impl<or_json::Config_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >   

    The parameters as a JSON dict.

  • parameters_str   type: std::string   

    The parameters as a JSON string.
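
A sketch of constructing the cell with an illustrative JSON string; the outputs become available once the cell has been processed by a scheduler:

    from object_recognition_core.ecto_cells import io

    info = io.PipelineInfo(parameters='{"method": "TOD", "submethod": {}}')

    # After execution, info.outputs.parameters holds the parsed JSON dict and
    # info.outputs.parameters_str the original string.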

object_recognition_ros.ecto_cells.io_ros

MsgAssembler

Brief doc

Given object ids and poses, fills the object recognition message.

Parameters

  • publish_clusters   type: bool    not required   default: True

    Sets whether the point cloud clusters should be published.

Inputs

  • frame_id   type: std::string   

    The frame_id in which the objects are seen. It can also be obtained from image_message.

  • image_message   type: boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const>   

    The image message from which to get the header.

  • pose_results   type: std::vector<object_recognition_core::common::PoseResult, std::allocator<object_recognition_core::common::PoseResult> >   

    The results of object recognition

Outputs

  • msg   type: boost::shared_ptr<object_recognition_msgs::RecognizedObjectArray_<std::allocator<void> > const>   

    The poses
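
A sketch of assembling the recognition message; `detector` and `publisher` stand in for the detection pipeline and a RecognizedObjectArray publisher:

    from object_recognition_ros.ecto_cells import io_ros

    assembler = io_ros.MsgAssembler(publish_clusters=False)

    #   plasm.connect(detector['pose_results'] >> assembler['pose_results'])
    #   plasm.connect(assembler['msg'] >> publisher['input'])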

Publisher_MarkerArray

Brief doc

Publishes a visualization_msgs::MarkerArray.

Parameters

  • latched   type: bool    not required   default: False

    Is this a latched topic?

  • queue_size   type: int    not required   default: 2

    The number of incoming messages to buffer.

  • topic_name   type: std::string    required   default: /ros/topic/name

    The topic name to publish to. May be remapped.

Inputs

  • input   type: boost::shared_ptr<visualization_msgs::MarkerArray_<std::allocator<void> > const>   

    The message to publish.

Outputs

  • has_subscribers   type: bool   

    Has currently connected subscribers.
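
A sketch of constructing the publisher; the topic name is illustrative, and the ROS node is assumed to be initialized beforehand via the ecto_ros init helper:

    import sys
    import ecto_ros
    from object_recognition_ros.ecto_cells import io_ros

    ecto_ros.init(sys.argv, 'marker_publisher_node')

    publisher = io_ros.Publisher_MarkerArray(topic_name='/markers',
                                             latched=True)

    # `marker_source` is a hypothetical cell producing a MarkerArray:
    #   plasm.connect(marker_source['output'] >> publisher['input'])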

Subscriber_Marker

Brief doc

Subscribes to a visualization_msgs::Marker.

Parameters

  • queue_size   type: int    not required   default: 2

    The number of incoming messages to buffer.

  • tcp_nodelay   type: bool    not required   default: False

    Enable/disable Nagle's algorithm, which bundles small packets together.

  • topic_name   type: std::string    required   default: /ros/topic/name

    The topic name to subscribe to.

Outputs

  • output   type: boost::shared_ptr<visualization_msgs::Marker_<std::allocator<void> > const>   

    The received message.
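
A sketch of constructing the subscriber (the MarkerArray variant below is used identically); the topic name and the downstream `consumer` cell are illustrative:

    from object_recognition_ros.ecto_cells import io_ros

    subscriber = io_ros.Subscriber_Marker(topic_name='/marker', queue_size=2)

    #   plasm.connect(subscriber['output'] >> consumer['input'])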

Subscriber_MarkerArray

Brief doc

Subscribes to a visualization_msgs::MarkerArray.

Parameters

  • queue_size   type: int    not required   default: 2

    The number of incoming messages to buffer.

  • tcp_nodelay   type: bool    not required   default: False

    Enable/disable Nagle's algorithm, which bundles small packets together.

  • topic_name   type: std::string    required   default: /ros/topic/name

    The topic name to subscribe to.

Outputs

  • output   type: boost::shared_ptr<visualization_msgs::MarkerArray_<std::allocator<void> > const>   

    The received message.

Bagger_MarkerArray

Brief doc

A bagger for messages of a given type. Enables reading from and writing to ROS bags.

Parameters

  • bagger   type: boost::shared_ptr<ecto_ros::Bagger_base const>    not required   default: [not visible from python]

    The bagger.

  • topic_name   type: std::string    required   default: /ros/topic/name

    The topic name to subscribe to.

Bagger_Marker

Brief doc

A bagger for messages of a given type. Enables reading from and writing to ROS bags.

Parameters

  • bagger   type: boost::shared_ptr<ecto_ros::Bagger_base const>    not required   default: [not visible from python]

    The bagger.

  • topic_name   type: std::string    required   default: /ros/topic/name

    The topic name to subscribe to.
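
A sketch of how these bagger cells are typically used, assuming the ecto_ros BagWriter interface that takes a dict of baggers; the bag file name, topic and `marker_source` cell are illustrative:

    import ecto_ros
    from object_recognition_ros.ecto_cells import io_ros

    baggers = dict(markers=io_ros.Bagger_MarkerArray(topic_name='/markers'))
    bag_writer = ecto_ros.BagWriter(baggers=baggers, bag='markers.bag')

    #   plasm.connect(marker_source['output'] >> bag_writer['markers'])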

Publisher_Marker

Brief doc

Publishes a visualization_msgs::Marker.

Parameters

  • latched   type: bool    not required   default: False

    Is this a latched topic?

  • queue_size   type: int    not required   default: 2

    The number of incoming messages to buffer.

  • topic_name   type: std::string    required   default: /ros/topic/name

    The topic name to publish to. May be remapped.

Inputs

  • input   type: boost::shared_ptr<visualization_msgs::Marker_<std::allocator<void> > const>   

    The message to publish.

Outputs

  • has_subscribers   type: bool   

    Has currently connected subscribers.

object_recognition_core.ecto_cells.filters

depth_filter

Brief doc

Given a depth image, returns the mask of what lies between two depths.

Parameters

  • d_max   type: float    not required   default: 3.40282346639e+38

    The maximal distance at which objects become interesting (in meters).

  • d_min   type: float    not required   default: -3.40282346639e+38

    The minimal distance at which objects become interesting (in meters).

Inputs

  • points3d   type: cv::Mat   

    The 3d points: width by height by 3 channels

Outputs

  • mask   type: cv::Mat   

    The mask of what is within the depth range in the image
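
A sketch of using the filter with illustrative depth bounds; `depth_to_3d` and `consumer` stand in for whatever produces the 3d points and consumes the mask:

    from object_recognition_core.ecto_cells import filters

    # Keep only what lies between 0.4 m and 1.5 m from the camera.
    mask_filter = filters.depth_filter(d_min=0.4, d_max=1.5)

    #   plasm.connect(depth_to_3d['points3d'] >> mask_filter['points3d'])
    #   plasm.connect(mask_filter['mask'] >> consumer['mask'])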

object_recognition_core.ecto_cells.voter

Aggregator

Brief doc

Simply aggregates the results from several pipelines

Parameters

  • n_inputs   type: unsigned int    required   no default value

    Number of inputs to AND together

Outputs

  • pose_results   type: std::vector<object_recognition_core::common::PoseResult, std::allocator<object_recognition_core::common::PoseResult> >   

    The results of object recognition
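
A sketch of constructing the aggregator for two pipelines; the per-pipeline input names are not listed above, so the pose_results<i> naming in the comment is an assumption:

    from object_recognition_core.ecto_cells import voter

    # Combine the pose results of two detection pipelines into one output.
    aggregator = voter.Aggregator(n_inputs=2)

    # Assumed input naming (pose_results1 ... pose_resultsN):
    #   plasm.connect(pipeline1['pose_results'] >> aggregator['pose_results1'])
    #   plasm.connect(pipeline2['pose_results'] >> aggregator['pose_results2'])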