====== Write an interface to your perception system ======
There are two main approaches to how perception can be performed: Some perception algorithms continuously detect objects and output the results (in ROS terminology: they publish them on a topic), while others perform perception only upon request (in ROS terminology: they offer a service).

In this tutorial, we explain, using two minimal examples, how to write interfaces to these two kinds of perception systems. Currently, there is no '…'

Before starting with the tutorial, it is important to first understand how [[object_pose_representation|object detections]] are represented in KnowRob. Further information on this topic can be found in Sections 3.2 and 6.1 in http://…
====== Setting up the perception tutorial ======

The knowrob_perception tutorial is part of the knowrob_tutorials repository. You need to check it out into your ROS workspace (i.e. into a directory that is part of your ROS_PACKAGE_PATH):
<code>
git clone https://…
</code>

After the checkout, you should be able to build the package:

<code>
roscd knowrob_perception
rosmake
</code>
====== Interfacing topic-based perception systems ======

===== Publisher =====

The file src/… implements a dummy publisher that publishes simulated object detections. It can be started using:
<code>
rosrun knowrob_perception_tutorial dummy_publisher
</code>

Once the publisher is running, you can have a look at the generated object poses by calling the following command from a different terminal:
<code>
rostopic echo /…
</code>
It should output messages of the following form:
<code>
type: DinnerFork
pose:
  header:
    seq: 0
    stamp:
      secs: 1357547989
      nsecs: 196672575
    frame_id: map
  pose:
    position:
      x: 0.300724629488
      y: 2.96134330258
      z: 1.56672560148
    orientation:
      x: 0.0
      y: 0.0
      z: 0.0
      w: 1.0
</code>
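To experiment with this message structure outside of ROS, the fields shown above can be mirrored in plain Java. The following is a minimal sketch; the class and field names are illustrative and are not the generated ROS message classes.

```java
// Plain-Java mirror of the ObjectDetection message structure shown above
// (illustrative only; not the actual generated ROS message classes).
public class ObjectDetectionExample {

    static class Point { double x, y, z; }
    static class Quat { double x, y, z, w = 1.0; }
    static class Pose { Point position = new Point(); Quat orientation = new Quat(); }
    static class Header { long seq; String frameId; }
    static class PoseStamped { Header header = new Header(); Pose pose = new Pose(); }

    static class ObjectDetection {
        String type;
        PoseStamped pose = new PoseStamped();
    }

    // Build the DinnerFork example from the rostopic echo output above
    public static ObjectDetection dinnerForkExample() {
        ObjectDetection obj = new ObjectDetection();
        obj.type = "DinnerFork";
        obj.pose.header.frameId = "map";
        obj.pose.pose.position.x = 0.300724629488;
        obj.pose.pose.position.y = 2.96134330258;
        obj.pose.pose.position.z = 1.56672560148;
        // orientation stays at the identity quaternion (0, 0, 0, 1)
        return obj;
    }

    public static void main(String[] args) {
        ObjectDetection obj = dinnerForkExample();
        System.out.println(obj.type + " in frame " + obj.pose.header.frameId);
    }
}
```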
===== Subscriber =====

The counterpart on the client side that consumes the object detections is implemented in the file ''…''.

The following code snippet is the main part of the ''…'' class: it pops incoming detections from a callback queue, converts the pose quaternion into a homogeneous transformation matrix, and asserts the detection to KnowRob via the ''create_object_perception'' predicate:

<code java>
while (n.isValid()) {

  obj = callback.pop();

  Matrix4d p = quaternionToMatrix(obj.pose.pose);
  String q = "create_object_perception("
           + "'…" + obj.type + "', ["
           + p.m00 + "," + p.m01 + "," + p.m02 + "," + p.m03 + ","
           + p.m10 + "," + p.m11 + "," + p.m12 + "," + p.m13 + ","
           + p.m20 + "," + p.m21 + "," + p.m22 + "," + p.m23 + ","
           + p.m30 + "," + p.m31 + "," + p.m32 + "," + p.m33
           + "], ['…'], ObjInst)";

  PrologInterface.executeQuery(q);
  n.spinOnce();
}
</code>
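The ''quaternionToMatrix'' helper used in the snippet is not shown in the tutorial. A minimal self-contained sketch of such a conversion (an assumption: it represents the 4x4 homogeneous matrix as a row-major ''double[16]'' instead of a ''Matrix4d'' to stay dependency-free) could look like:

```java
// Convert a pose (position + unit quaternion) into a row-major 4x4
// homogeneous transformation matrix. Sketch only: the actual subscriber
// returns a Matrix4d; here a plain double[16] keeps the example standalone.
public class QuaternionToMatrix {

    public static double[] quaternionToMatrix(double px, double py, double pz,
                                              double qx, double qy, double qz, double qw) {
        double[] m = new double[16];
        // standard quaternion-to-rotation-matrix formula (unit quaternion assumed)
        m[0]  = 1 - 2 * (qy * qy + qz * qz);
        m[1]  = 2 * (qx * qy - qz * qw);
        m[2]  = 2 * (qx * qz + qy * qw);
        m[3]  = px;                        // translation x
        m[4]  = 2 * (qx * qy + qz * qw);
        m[5]  = 1 - 2 * (qx * qx + qz * qz);
        m[6]  = 2 * (qy * qz - qx * qw);
        m[7]  = py;                        // translation y
        m[8]  = 2 * (qx * qz - qy * qw);
        m[9]  = 2 * (qy * qz + qx * qw);
        m[10] = 1 - 2 * (qx * qx + qy * qy);
        m[11] = pz;                        // translation z
        m[15] = 1;                         // homogeneous row [0 0 0 1]
        return m;
    }

    public static void main(String[] args) {
        // identity orientation: the rotation part becomes the identity matrix
        double[] m = quaternionToMatrix(0.3, 2.9, 1.5, 0, 0, 0, 1);
        System.out.println(m[0] + " " + m[3]);
    }
}
```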

The ''create_object_perception'' predicate, defined in the knowrob_perception package, is shown below. It creates a new object instance of the given object class, creates an instance of the perception event, links the two, and stores the pose at which the object has been perceived:

<code>
create_object_perception(ObjClass, PoseList, PerceptionTypes, ObjInst) :-
  rdf_instance_from_class(ObjClass, ObjInst),
  create_perception_instance(PerceptionTypes, Perception),
  set_object_perception(ObjInst, Perception),
  set_perception_pose(Perception, PoseList).
</code>
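The ''PoseList'' argument is the 4x4 homogeneous pose matrix flattened row by row into a 16-element list, matching the ''m00 … m33'' order the Java subscriber uses when building the query string. A small sketch of this flattening (using a plain ''double[4][4]'' for the matrix; the helper name is illustrative):

```java
// Flatten a 4x4 homogeneous pose matrix row by row into the 16-element
// list expected by create_object_perception (order m00, m01, ..., m33).
public class PoseListExample {

    public static double[] flattenRowMajor(double[][] m) {
        double[] list = new double[16];
        for (int row = 0; row < 4; row++)
            for (int col = 0; col < 4; col++)
                list[4 * row + col] = m[row][col];
        return list;
    }

    public static void main(String[] args) {
        // identity rotation with translation (0.3, 2.9, 1.5)
        double[][] pose = {
            {1, 0, 0, 0.3},
            {0, 1, 0, 2.9},
            {0, 0, 1, 1.5},
            {0, 0, 0, 1  }
        };
        double[] list = flattenRowMajor(pose);
        // the translation ends up at indices 3, 7 and 11
        System.out.println(list[3] + " " + list[7] + " " + list[11]);
    }
}
```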

===== KnowRob integration =====

Whenever the subscriber is started, it creates the KnowRob-internal representations for all object detections that are received via the topic. It can be run from Prolog via the [[http://…]] interface using the following predicate:

<code>
obj_detections_listener(Listener) :-
  jpl_new('…', [], Listener),
  jpl_call(Listener, …).
</code>

If the dummy publisher is running, the following sequence of commands starts the topic listener and queries for object instances and their poses:

<code>
?- obj_detections_listener(L).
L = @'…'.
Attaching 0x8afd1010

<wait for a few seconds...>

?- owl_individual_of(A, '…').
A = '…' ;
A = '…' ;
A = '…' ;
...

?- current_object_pose('…', P).
P = [1.0, …].
</code>

====== Interfacing service-based perception systems ======

===== Perception service =====

The dummy perception service is very similar to the dummy publisher. Whenever a request for an object detection is received, it responds with a simulated detection of a random object type at a random pose. In real scenarios, the request will probably not be empty, but specify properties of the perception method to be used. The code of the dummy service can be found in the file src/…. It can be started using:
<code>
rosrun knowrob_perception_tutorial dummy_service
</code>
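Outside of ROS, the core behavior of such a dummy service can be sketched in plain Java. The type list and value ranges below are illustrative assumptions, not the actual values used by the tutorial's dummy service:

```java
import java.util.Random;

// Sketch of the dummy service's core behavior: every "request" is
// answered with a random object type at a random pose.
// Type list and coordinate ranges are illustrative assumptions.
public class DummyDetectionExample {

    static final String[] TYPES = { "DinnerFork", "Cup", "DinnerPlate" };
    static final Random RNG = new Random();

    static class Detection {
        String type;
        double x, y, z;         // random position
        double qx, qy, qz, qw;  // orientation quaternion
    }

    public static Detection detect() {
        Detection d = new Detection();
        d.type = TYPES[RNG.nextInt(TYPES.length)];
        d.x = 3 * RNG.nextDouble();
        d.y = 3 * RNG.nextDouble();
        d.z = 2 * RNG.nextDouble();
        d.qw = 1.0; // identity orientation for simplicity
        return d;
    }

    public static void main(String[] args) {
        Detection d = detect();
        System.out.println(d.type + " at (" + d.x + ", " + d.y + ", " + d.z + ")");
    }
}
```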

===== Service client =====

The service client is implemented in the ''…'' class. It calls the object detection service and returns a reference to the resulting ObjectDetection message, which is then processed on the Prolog side.
===== KnowRob integration =====

The integration with KnowRob is implemented in prolog/… and is realized as a computable. In contrast to the topic-based example, which performed most of the processing on the Java side, more of the processing is done in Prolog here:
<code>
comp_object_detection(_ObjClass, ObjInst) :-

  % Call the DetectObject service for retrieving a new object detection.
  % The method returns a reference to the Java ObjectDetection message object
  jpl_call('…', …, ObjectDetection),

  % Read information from the ObjectDetection object

  % Read type -> simple string; combine with KnowRob namespace
  jpl_get(ObjectDetection, type, LocalType),
  atom_concat('…', LocalType, Type),

  % Read pose -> convert from quaternion to pose list
  jpl_get(ObjectDetection, pose, PoseStamped),
  jpl_get(PoseStamped, pose, Pose),

  jpl_call('…', …, PoseMatrix),
  knowrob_coordinates:…,

  % Create the object representations in the knowledge base
  % The third argument is the type of object perception describing
  % the method how the object has been detected
  create_object_perception(Type, PoseList, ['…'], ObjInst).
</code>

====== Adapting the examples to your system ======

====== Other kinds of perception systems ======

In this tutorial, we have concentrated on object recognition as a special case of a perception task. There are of course other perception tasks, such as the identification and pose estimation of humans, or the recognition and interpretation of spoken commands. Most of these systems can however be interfaced in a very similar way: If they produce information continuously and asynchronously, the topic-based interface described above applies; if they compute their results only upon request, the service-based interface is the better fit.