write_an_interface_to_your_perception_system [2013/01/07 09:54] – tenorth
write_an_interface_to_your_perception_system [2013/02/12 17:41] – external edit
</code>
  
If the dummy publisher is running, the following sequence of commands starts the topic listener and queries for object instances and their poses:
<code>
?- obj_detections_listener(L).
P = [1.0,0.0,0.0,2.9473,0.0,1.0,0.0,2.6113,0.0,0.0,1.0,0.2590,0.0,0.0,0.0,1.0].
</code>
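The 16-element list bound to ''P'' is a 4x4 homogeneous transformation matrix in row-based order, so the translation occupies indices 3, 7 and 11. As an illustrative sketch (this helper class is not part of the tutorial code), reading the translation out in Java could look like this:

```java
import java.util.Arrays;

// Illustrative helper (not part of the tutorial code): treat a 16-element
// row-based pose list as a 4x4 homogeneous transformation matrix and
// read out its translation vector.
public class PoseListDemo {

    /** Returns {x, y, z} from a row-based 4x4 pose given as 16 doubles. */
    public static double[] translationOf(double[] pose) {
        if (pose.length != 16)
            throw new IllegalArgumentException("expected 16 matrix elements");
        return new double[] { pose[3], pose[7], pose[11] };
    }

    public static void main(String[] args) {
        // The pose list from the query above
        double[] p = { 1.0, 0.0, 0.0, 2.9473,
                       0.0, 1.0, 0.0, 2.6113,
                       0.0, 0.0, 1.0, 0.2590,
                       0.0, 0.0, 0.0, 1.0 };
        System.out.println(Arrays.toString(translationOf(p)));  // [2.9473, 2.6113, 0.259]
    }
}
```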
  
====== Interfacing service-based perception systems ======
===== Perception service =====
  
The dummy perception service is very similar to the dummy publisher. Whenever a request for an object detection is received, it responds with a simulated detection of a random object type at a random pose. In real scenarios, the request will probably not be empty, but will specify properties of the perception method to be used. The code of the dummy service can be found in the file src/edu/tum/cs/ias/knowrob/tutorial/DummyService.java. It can be started with the following command:
<code>
rosrun knowrob_perception_tutorial dummy_service
</code>
  
===== Service client =====
  
The ''callObjDetectionService()'' method in the service client simply calls the dummy ROS service and passes on the ''ObjectDetection'' message returned by the service call. The code can be found in the file src/edu/tum/cs/ias/knowrob/tutorial/DummyClient.java.
  
  
===== KnowRob integration =====
  
While the service client is much simpler than the topic listener, the integration with KnowRob is a bit more complex. The reason is that the service call needs to be actively triggered, while the topic listener just runs in the background in a separate thread. This means that the inference needs to be aware of the possibility of acquiring information about object detections by calling this service. Such functionality can be realized using [[define_computables|computables]] that describe, for an OWL class or property, how individuals of this class or values of this property can be computed. To better understand the following steps, it is recommended to have completed the [[define_computables|tutorial on defining computables]].
  
In a first step, we need to implement a Prolog predicate that performs the service call, processes the returned information, adds it to the knowledge base, and returns the identifiers of the detected object instances. This predicate is implemented in prolog/perception_tutorial.pl. In contrast to the topic-based example above, which performed most processing in Java, this example shows how more of the processing can be done in Prolog:
  
<code>
comp_object_detection(ObjInst, _ObjClass) :-
  
  % Call the DetectObject service for retrieving a new object detection.
</code>
  
The predicate first calls the static ''callObjDetectionService'' method in the ''DummyClient'' class and receives an ''ObjectDetection'' object as result. It then reads its member variables (type, pose), converts the pose from a quaternion into a pose matrix, and represents it as a Prolog list in row-based matrix form. In the end, it calls the same ''create_object_perception'' predicate that was also used in the previous example.
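The quaternion-to-matrix conversion mentioned above can be sketched as follows; this is an illustrative Java snippet using the common (w, x, y, z) unit-quaternion convention, not the actual conversion code from the tutorial:

```java
// Illustrative sketch (not the actual tutorial code): build a row-based
// 4x4 pose matrix from a unit quaternion (w, x, y, z) and a translation
// (tx, ty, tz), matching the row-based list layout used above.
public class QuatToPoseMatrix {

    public static double[] poseMatrix(double w, double x, double y, double z,
                                      double tx, double ty, double tz) {
        return new double[] {
            1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y),     tx,
            2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x),     ty,
            2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y), tz,
            0.0,               0.0,               0.0,               1.0
        };
    }

    public static void main(String[] args) {
        // Identity rotation, translated to the position used in the examples above
        double[] m = poseMatrix(1, 0, 0, 0, 2.9473, 2.6113, 0.2590);
        System.out.println(m[3] + " " + m[7] + " " + m[11]);
    }
}
```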
  
This predicate can be used to manually query the perception service from Prolog (assuming the perception service is running in another terminal):
<code>
rosrun rosprolog rosprolog knowrob_perception_tutorial
?- comp_object_detection(Obj, _).
Obj = 'http://ias.cs.tum.edu/kb/knowrob.owl#Cup_vUXiHMJy'.
</code>
  
Such a manual query requires that the user is aware of the existence of this service. It also requires adaptation of the query whenever the context changes, e.g. when different or multiple recognition systems are used. We can avoid these problems by wrapping the predicate into a computable; with this definition, the predicate will automatically be called whenever the user asks for an object pose and the service interface is available. The following OWL code defines a computable Prolog class for the example predicate:
  
<code>
<computable:PrologClass rdf:about="#computeObjectDetections">
    <computable:command rdf:datatype="&xsd;string">comp_object_detection</computable:command>
    <computable:cache rdf:datatype="&xsd;string">cache</computable:cache>
    <computable:visible rdf:datatype="&xsd;string">unvisible</computable:visible>
    <computable:target rdf:resource="&knowrob;HumanScaleObject"/>
</computable:PrologClass>
</code>

Instead of calling the service directly, we can now query for object instances and obtain, in addition to objects already known from e.g. a semantic map, the detections generated by our service:
<code>
?- rdfs_instance_of(A, knowrob:'HumanScaleObject').
A = 'http://ias.cs.tum.edu/kb/knowrob.owl#TableKnife_vUXiHMJy'.
</code>
  
  
====== Other kinds of perception systems ======
  
In this tutorial, we have concentrated on object recognition as a special case of perception. There are of course other perception tasks, like the identification and pose estimation of humans, or the recognition and interpretation of spoken commands. Most of these systems can, however, be interfaced in a very similar way: if they produce information continuously and asynchronously, a topic-based interface can be used; if they compute information on demand, the computable-based interface can be adapted.

====== Adapting the examples to your system ======
 + 
To keep the examples as simple and self-contained as possible, we have defined our own dummy components and messages. Your perception system will probably use slightly different messages and may provide more or less information. In this case, you will need to adapt the service client or topic listener to correctly extract information from your messages. After creating the object instance with ''create_object_perception'', you can use ''rdf_assert'' to add further properties to the object (e.g. color or weight).