Robots continuously have to deal with coordinate systems. While specialized systems exist for managing coordinate transformations, there are several use cases in which the knowledge base, too, should be aware of the coordinate systems used to describe spatial information.
When robots detect an object and estimate its location, they usually do this in robot-centric coordinates, i.e. relative to their camera or their base frame. Converting between different robot-intrinsic coordinate frames is supported by, for example, the tf library. The problem is that transformations in the tf system need to be continuously re-published and are only valid for a few seconds.
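The expiry behavior described above can be sketched as a minimal buffer that only answers lookups for recently published transforms. This is an illustration, not the actual tf API; the class and method names are assumptions, and tf's real cache window is about 10 seconds by default.

```python
import time

# Illustrative sketch of tf-like transform expiry (names are hypothetical,
# not the real tf API). Lookups fail once a transform grows stale.
TF_CACHE_TIME = 10.0  # seconds, roughly tf's default cache window

class TransformBuffer:
    def __init__(self):
        # (parent_frame, child_frame) -> time of last publish
        self._stamps = {}

    def publish(self, parent, child):
        """Record that the transform parent->child was (re-)published now."""
        self._stamps[(parent, child)] = time.time()

    def can_transform(self, parent, child):
        """A lookup succeeds only if the transform was published recently."""
        stamp = self._stamps.get((parent, child))
        return stamp is not None and time.time() - stamp <= TF_CACHE_TIME
```

Unless a publisher keeps refreshing the transform, `can_transform` starts returning `False` after the cache window, which is why long-term spatial knowledge cannot simply be left on the /tf topic.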
Today's robots can have different maps of the same environment: Often, a 2D laser-scanner map is used for self-localization, while three-dimensional maps semantically describe the objects in the environment. All of these maps potentially have different coordinate systems with different origins. There are several options for integrating them:
When describing the spatial configuration of an object type at the class level, for example the positions of the hinge and handle of a cupboard, these poses need to be given relative to the object's origin. Once such an object is detected, the system creates an instance of it and transforms the relative coordinates into global coordinates in the map.
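The relative-to-global step is a composition of homogeneous transforms: the handle's global pose is the object's map pose multiplied by the handle's object-relative pose. A minimal sketch with hypothetical numbers (identity rotations for readability; none of these values come from KnowRob):

```python
def mat_mul(a, b):
    """Multiply two 4x4 homogeneous transformation matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Hypothetical poses: the cupboard in the map frame, and the handle
# relative to the cupboard's origin.
T_map_object = [[1.0, 0.0, 0.0, 2.0],
                [0.0, 1.0, 0.0, 3.0],
                [0.0, 0.0, 1.0, 0.8],
                [0.0, 0.0, 0.0, 1.0]]
T_object_handle = [[1.0, 0.0, 0.0, 0.4],
                   [0.0, 1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 0.3],
                   [0.0, 0.0, 0.0, 1.0]]

# Global pose of the handle = object pose composed with the relative pose.
T_map_handle = mat_mul(T_map_object, T_object_handle)
# translation part of T_map_handle is (2.4, 3.0, 1.1)
```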
The opposite case is also possible, especially in the RoboEarth context: The robot has detected the different parts of an object, estimated its articulation properties, and would like to export this information to make it available to other robots. To make the model usable in a different environment, the environment-specific global coordinates need to be translated into a coordinate system relative to the object.
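Going back from global to object-relative coordinates uses the inverse of the object's pose: for a rigid transform with rotation R and translation t, the inverse is [Rᵀ | −Rᵀt]. A sketch with hypothetical values (again illustrative numbers, not KnowRob output):

```python
def invert(T):
    """Invert a rigid 4x4 transform: inverse = [R^T | -R^T t]."""
    R_t = [[T[j][i] for j in range(3)] for i in range(3)]  # transposed rotation
    t = [T[i][3] for i in range(3)]
    t_inv = [-sum(R_t[i][k] * t[k] for k in range(3)) for i in range(3)]
    return [R_t[0] + [t_inv[0]],
            R_t[1] + [t_inv[1]],
            R_t[2] + [t_inv[2]],
            [0.0, 0.0, 0.0, 1.0]]

def mat_mul(a, b):
    """Multiply two 4x4 homogeneous transformation matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Hypothetical values: the object's global pose and a part's global pose.
T_map_object = [[1.0, 0.0, 0.0, 2.0],
                [0.0, 1.0, 0.0, 3.0],
                [0.0, 0.0, 1.0, 0.8],
                [0.0, 0.0, 0.0, 1.0]]
T_map_part = [[1.0, 0.0, 0.0, 2.4],
              [0.0, 1.0, 0.0, 3.0],
              [0.0, 0.0, 1.0, 1.1],
              [0.0, 0.0, 0.0, 1.0]]

# Object-relative pose of the part, suitable for an exported model.
T_object_part = mat_mul(invert(T_map_object), T_map_part)
# translation part is (0.4, 0.0, 0.3)
```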
Different approaches can be taken for dealing with coordinate systems:
We tried to find a solution that (1) is compatible with existing components, (2) can be loaded optionally rather than being required by the whole system, and (3) is as simple and flexible as possible. The result is a combination of the alternatives mentioned above:
A recent addition (Summer 2014) that has not yet been integrated into all parts of KnowRob offers a tf-like interface, but operates on logged transformation data stored in a MongoDB database instead of the runtime data on the /tf topic. Please have a look at the package knowrob_mongo for further information.
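The core of such a logged-transform lookup is retrieving the latest transform at or before a query time. The sketch below reproduces that logic in plain Python; the field names and the MongoDB query shown in the comment are assumptions for illustration, not knowrob_mongo's actual schema.

```python
# In MongoDB terms, this corresponds roughly to a query like
#   db.tf.find({"stamp": {"$lte": t}}).sort("stamp", -1).limit(1)
# (collection and field names here are hypothetical).
def lookup_transform(log, child_frame, query_time):
    """Return the most recent logged transform for child_frame at or
    before query_time, or None if no such record exists."""
    candidates = [rec for rec in log
                  if rec["child_frame_id"] == child_frame
                  and rec["stamp"] <= query_time]
    return max(candidates, key=lambda rec: rec["stamp"], default=None)

# Example log of timestamped transforms (illustrative data).
log = [
    {"child_frame_id": "camera", "stamp": 1.0, "translation": (0.0, 0.0, 1.2)},
    {"child_frame_id": "camera", "stamp": 2.0, "translation": (0.1, 0.0, 1.2)},
    {"child_frame_id": "camera", "stamp": 3.0, "translation": (0.2, 0.0, 1.2)},
]
```

Unlike the live /tf buffer, such a log can answer queries about arbitrarily old points in time, which is what makes the logged interface useful for reasoning about past experiences.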
The code for transforming coordinates is contained in the package knowrob_objects in the file knowrob_coordinates.pl. Prolog bindings to the tf library are provided by the package tf_prolog.
'Global coordinate' so far means 'coordinate in the robot's environment map'. There is currently no system for linking these maps to a world-global coordinate system like the WGS84 system used for GPS localization. There are several options for how this could be done, keeping in mind that they should be usable by an autonomous robot without human intervention:
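To make the missing link concrete: if the geographic pose of the map origin were known (an assumption; as stated above, no such link exists in the system), small map offsets in meters could be converted to WGS84 coordinates with a local flat-earth approximation. This is only a sketch; a real system would use a proper geodetic library.

```python
import math

EARTH_RADIUS = 6378137.0  # WGS84 equatorial radius in meters

def map_to_wgs84(origin_lat, origin_lon, x_east, y_north):
    """Approximate WGS84 position of a point given in meters east/north
    of a map origin with known latitude/longitude (degrees). Only valid
    for small offsets; accuracy degrades near the poles."""
    dlat = math.degrees(y_north / EARTH_RADIUS)
    dlon = math.degrees(
        x_east / (EARTH_RADIUS * math.cos(math.radians(origin_lat))))
    return origin_lat + dlat, origin_lon + dlon
```

Such a conversion only works once the map origin has been georeferenced, which is exactly the step that would have to happen without human intervention.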