Coordinate system representation

Robots continuously have to deal with coordinate systems. While specialized systems exist for managing coordinate transformations, there are several use cases in which the knowledge base itself should be aware of the coordinate systems used for describing spatial information.

Use cases

Transform robot-centric information into map coordinates

When robots detect an object and estimate its location, they usually do this in robot-centric coordinates, i.e. relative to their camera or their base frame. Converting between different robot-intrinsic coordinate frames is supported by, for example, the [http://ros.org/wiki/tf tf] library. The problem is that the transformations in the tf system need to be continuously re-published and are only valid for a few seconds, so they cannot serve as a long-term store of spatial information.
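
The underlying operation in all of these cases is the chaining of rigid transformations. The following is a minimal sketch of that chaining in Prolog, assuming poses are represented as flat, row-major 4x4 matrices (lists of 16 numbers); matrix_multiply_4x4/3 and object_in_map/3 are hypothetical helpers used only for illustration (they are not part of KnowRob or tf) and are reused by the later sketches on this page.

  % Hypothetical helper: multiply two 4x4 matrices given as flat,
  % row-major lists of 16 numbers.
  matrix_multiply_4x4(A, B, C) :-
      findall(V,
              ( between(0, 3, Row),
                between(0, 3, Col),
                cell_value(A, B, Row, Col, V) ),
              C).

  cell_value(A, B, Row, Col, V) :-
      findall(P,
              ( between(0, 3, K),
                IA is Row*4 + K, nth0(IA, A, AV),
                IB is K*4 + Col, nth0(IB, B, BV),
                P is AV * BV ),
              Ps),
      sum_list(Ps, V).

  % The pose of a detected object in the map frame is the robot's localization
  % estimate (base pose in the map frame) composed with the detection result
  % (object pose in the base frame).
  object_in_map(BaseInMap, ObjInBase, ObjInMap) :-
      matrix_multiply_4x4(BaseInMap, ObjInBase, ObjInMap).

  % Example: robot base at (3, 2, 0) in the map, object 1 m in front of the base;
  % the resulting pose has a translation of (4, 2, 0) in the map frame.
  ?- object_in_map([1,0,0,3.0, 0,1,0,2.0, 0,0,1,0.0, 0,0,0,1],
                   [1,0,0,1.0, 0,1,0,0.0, 0,0,1,0.0, 0,0,0,1],
                   ObjInMap).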

Integrate different maps

Today's robots can have different maps of the same environment: often, a 2D laser-scanner map is used for self-localization, while three-dimensional maps semantically describe the objects in the environment. All of these maps potentially have different coordinate systems with different origins. There are different options for how they can be integrated:

  • transform all coordinates into a 'global' frame when initially loading the map (see the sketch after this list)
  • perform the transformation on-demand when answering a query
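
A minimal sketch of the first option, reusing the hypothetical matrix_multiply_4x4/3 helper and pose-list format from above: if the (assumed) static transform between the origin of the 2D laser-scanner map and the origin of the semantic 'global' map is known, every pose from the laser map can be converted once, at load time.

  % Hypothetical static offset between the two map origins (made-up values).
  laser_map_in_global_map([1,0,0,0.5, 0,1,0,-1.2, 0,0,1,0.0, 0,0,0,1]).

  % Convert a pose from the laser-scanner map into the 'global' map frame
  % once while the map is being loaded.
  pose_in_global_map(PoseInLaserMap, PoseInGlobalMap) :-
      laser_map_in_global_map(Offset),
      matrix_multiply_4x4(Offset, PoseInLaserMap, PoseInGlobalMap).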

Instantiate an object model with object-centric coordinates

When describing the spatial configuration of an object type on the class level, for example the positions of the hinge and the handle of a cupboard, these poses need to be given relative to the object's origin. Once such an object is detected, the system creates an instance of it and transforms the relative coordinates into global coordinates in the map.
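
As a small illustration (reusing the hypothetical helper and pose-list format from the first sketch, with made-up poses): if the detected cupboard is located at (2, 1, 0) in the map and the class-level model places the handle 0.3 m along the cupboard's x-axis, the handle instance is placed at the composition of the two transforms.

  % Class-level model: handle pose relative to the cupboard's origin.
  handle_relative_to_cupboard([1,0,0,0.3, 0,1,0,0.0, 0,0,1,0.0, 0,0,0,1]).

  % Instantiation: compose the detected cupboard pose (in the map) with the
  % relative handle pose; the result is a translation of (2.3, 1.0, 0.0).
  ?- handle_relative_to_cupboard(HandleRel),
     CupboardInMap = [1,0,0,2.0, 0,1,0,1.0, 0,0,1,0.0, 0,0,0,1],
     matrix_multiply_4x4(CupboardInMap, HandleRel, HandleInMap).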

Export object-centric coordinates

The opposite case is also possible, especially in the RoboEarth context: the robot has detected the different parts of an object, estimated its articulation properties, and would like to export this information to make it available to other robots. In order for the model to be usable in a different environment, the environment-specific global coordinates need to be translated into a coordinate system relative to the object.
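
The export direction is the inverse composition: the pose of a part relative to the object is the inverse of the object's map pose multiplied with the part's map pose. A small sketch, again in the flat pose-list format and reusing matrix_multiply_4x4/3 from above; invert_rigid_transform/2 is a hypothetical helper, not a KnowRob predicate.

  % Hypothetical inverse of a rigid 4x4 transform in flat, row-major form:
  % the inverse rotation is the transpose, the inverse translation is -R^T * t.
  invert_rigid_transform([R00,R01,R02,TX, R10,R11,R12,TY, R20,R21,R22,TZ, _,_,_,_], Inv) :-
      IX is -(R00*TX + R10*TY + R20*TZ),
      IY is -(R01*TX + R11*TY + R21*TZ),
      IZ is -(R02*TX + R12*TY + R22*TZ),
      Inv = [R00,R10,R20,IX, R01,R11,R21,IY, R02,R12,R22,IZ, 0,0,0,1].

  % Part pose relative to the object = inverse(object pose in map) * part pose in map.
  part_relative_to_object(ObjectInMap, PartInMap, PartInObject) :-
      invert_rigid_transform(ObjectInMap, MapInObject),
      matrix_multiply_4x4(MapInObject, PartInMap, PartInObject).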

Alternatives

Different approaches can be taken for dealing with coordinate systems:

  1. All coordinates in the same global coordinate system
    • All coordinates in the map can easily be compared
    • Avoids converting poses at query time (as alternative 2 requires), which could be computationally expensive
    • Comparisons can be made without reasoning about the point in time at which a relative transformation was valid
  2. Coordinates in several local coordinate systems
    • Dependencies between coordinates become more explicit, which is especially useful for articulated objects: When the door of a cupboard is opened, the global pose of the handle changes, while its local pose relative to the cupboard remains the same (see the sketch after this list).
    • One needs to distinguish between static relative transformations (cupboard door – handle) and dynamic ones (robot base – object pose); the latter become invalid if the reference object has moved.
  3. Static transformations between the coordinate systems
    • Rather simple
    • Problematic when objects are moved and the transformations change
  4. Continuously updated transformations (e.g. the [http://ros.org/wiki/tf tf] library)
    • Flexible: relative coordinates are updated if the reference object has moved
    • Costly: even static transformations need to be continuously re-published
    • Storing all transformations over an extended period of time (hours, days) is infeasible (but would be needed to support time-traveling and reasoning over past world states)
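
To make the articulated-object argument of alternative 2 concrete, here is a small sketch (reusing the hypothetical matrix_multiply_4x4/3 helper and pose-list format from above, with made-up poses): the handle's pose relative to the door never changes, while its map pose is the product of three transforms, of which only the door transform depends on the opening angle.

  % Fixed transforms: cupboard pose in the map, handle pose relative to the door.
  cupboard_in_map([1,0,0,2.0, 0,1,0,1.0, 0,0,1,0.0, 0,0,0,1]).
  handle_in_door([1,0,0,0.35, 0,1,0,0.02, 0,0,1,0.0, 0,0,0,1]).

  % Door pose relative to the cupboard: a rotation about the vertical hinge axis.
  door_in_cupboard(Angle, [C,NS,0,0, S,C,0,0, 0,0,1,0, 0,0,0,1]) :-
      C is cos(Angle), S is sin(Angle), NS is -S.

  % Map pose of the handle: only the middle factor changes when the door opens.
  handle_in_map(Angle, HandleInMap) :-
      cupboard_in_map(Cupboard),
      door_in_cupboard(Angle, Door),
      handle_in_door(Handle),
      matrix_multiply_4x4(Cupboard, Door, CupboardDoor),
      matrix_multiply_4x4(CupboardDoor, Handle, HandleInMap).

Querying handle_in_map(0.0, H) and handle_in_map(1.2, H) yields different map poses for the handle, while the stored local transforms stay untouched.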

Approach chosen in KnowRob

We tried to find a solution that (1) is compatible with existing components, (2) can optionally be loaded but is not required for the whole system, and (3) is as simple and flexible as possible. The result is a combination of the alternatives mentioned above:

  • By default, all coordinates are in a global coordinate system, in particular all coordinates of object instances in the environment. This allows existing methods for spatial reasoning and visualization to be used, and makes simple queries possible.
  • Poses can be specified relative to other poses or objects, and can also be qualified with their tf frame ID. This is mainly used when importing or exporting information and is considered a rather temporary representation. There are methods to transform such coordinates with respect to a reference object or into a tf frame.

A recent addition (Summer 2014) that has not yet been integrated into all parts of KnowRob offers a tf-like interface, but operates on logged transformation data stored in a MongoDB database instead of the runtime data on the /tf topic. Please have a look at the package knowrob_mongo for further information.

Implementation

The code for transforming coordinates is contained in the package knowrob_objects in the file knowrob_coordinates.pl. Prolog bindings to the tf library are provided by the package tf_prolog. The main predicates are the following; example calls are sketched after the list.

  • instantiate_at_position(+ObjClassDef, +PoseList, -ObjInst): Reads all parts of the object described at the class level (ObjClassDef) and instantiates the object such that the main object is at pose PoseList and all physical parts of that object are at the correct poses relative to PoseList. Relies on the physical parts being specified using a poseRelativeTo description or having a tfFrame given.
  • pose_into_global_coord(+RelativePose, ?ReferencePose, -GlobalPose): Transforms a relative pose into a global pose (into the reference object's parent coordinate frame)
    • ReferencePose bound: transform into the parent coordinate system of ReferencePose
    • ReferencePose unbound, but poseRelativeTo set: use the specified relative pose as reference
    • ReferencePose unbound, but tfFrame set: use tf to convert the coordinates into the /map frame
  • pose_into_relative_coord(+GlobalPose, +ReferencePose, -RelativePose): Transforms a global pose into a relative pose (from the reference object's parent coordinate frame into coordinates relative to the reference object)
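
The following queries sketch how these predicates might be called, assuming poses are given in the flat 4x4 row-major pose-list format used in the sketches above; the class notation and the concrete argument forms are only illustrative and should be checked against knowrob_coordinates.pl.

  % Instantiate a cupboard class description such that the cupboard origin is at
  % (2, 1, 0) in the map; parts modelled via poseRelativeTo are placed accordingly.
  ?- instantiate_at_position(knowrob:'Cupboard',
                             [1,0,0,2.0, 0,1,0,1.0, 0,0,1,0.0, 0,0,0,1],
                             CupboardInst).

  % Transform a pose given relative to a reference pose into the parent (map)
  % frame, and back again, e.g. when exporting an object model. RelPose,
  % RefPose and GlobalPose stand for bound pose terms in an actual query.
  ?- pose_into_global_coord(RelPose, RefPose, GlobalPose).
  ?- pose_into_relative_coord(GlobalPose, RefPose, RelPose).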

Limitations/Extensions

Truly global coordinates

'Global coordinate' so far means 'coordinate in the robot's environment map'. There is currently no system for linking these maps to a world-global coordinate system such as the WGS84 system used for GPS localization. There are different options for how this could be done; one has to keep in mind that they should be usable by an autonomous robot without human intervention:

  • Specify e.g. WGS84 coordinates: This would be the easiest to use, but requires the robot to have access to global positioning data (e.g. GPS), which is often not available in indoor environments, or to have a human manually specify these coordinates.
  • Qualitative location description: A hierarchy of town, street, house number, floor number and room number could be used to find appropriate maps (a minimal sketch follows below).
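
A small sketch of how such a qualitative hierarchy could be represented and queried; all predicate names, maps and addresses below are made up for illustration.

  % Hypothetical facts linking environment maps to a qualitative address
  % (town, street, house number, floor number, room number).
  map_address(map_floor3_kitchen, address(sometown, main_street, 12, 3, 3001)).
  map_address(map_floor3_hallway, address(sometown, main_street, 12, 3, 3000)).

  % Select the map(s) matching a (possibly partially specified) address.
  map_for_address(Address, Map) :-
      map_address(Map, Address).

  % Example: all maps on the third floor of the building.
  ?- map_for_address(address(sometown, main_street, 12, 3, _Room), Map).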