Modeling and generating constraint-based movement descriptions

In many cases, the success of a manipulation action performed by a robot depends not only on what is done, but on how the robot moves while doing it; consider tasks such as unscrewing a bolt, pouring a liquid, or flipping a pancake. This aspect is often abstracted away in AI planning and action languages, which assume that an action succeeds as long as all of its preconditions are fulfilled. In a paper to be presented at the European Conference on Artificial Intelligence, we investigate how the constraint-based motion representations used in robot control can be combined with a semantic knowledge base, letting a robot reason about movements and automatically generate executable motion descriptions that can be adapted to different robots, objects, and tools.
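To give an intuition for this kind of representation, here is a minimal sketch in Python. The paper defines the representation formally, and the actual system stores it as facts in the knowledge base rather than as Python objects; all class and field names below are illustrative, not the project's API. The idea is that a movement is split into phases, and each phase holds constraints that bound a scalar relation between geometric features of the tool and the target object:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    """A geometric feature of an object, e.g. a point, line, or plane."""
    obj: str       # object the feature belongs to
    kind: str      # "point" | "line" | "plane"
    name: str      # e.g. "bottle_axis", "maker_top"

@dataclass
class Constraint:
    """Bounds a scalar relation between two features during a motion phase."""
    relation: str  # e.g. "height", "distance", "perpendicularity"
    tool: Feature
    target: Feature
    lower: float
    upper: float

# Hypothetical description of one phase of a pouring motion:
# keep the bottle opening above the surface, near its center, and tilted.
pour_phase = [
    Constraint("height",
               Feature("bottle", "point", "bottle_opening"),
               Feature("pancake_maker", "plane", "maker_top"),
               lower=0.2, upper=0.3),      # meters above the surface
    Constraint("distance",
               Feature("bottle", "point", "bottle_opening"),
               Feature("pancake_maker", "plane", "maker_top"),
               lower=0.0, upper=0.05),     # stay over the center region
    Constraint("perpendicularity",
               Feature("bottle", "line", "bottle_axis"),
               Feature("pancake_maker", "plane", "maker_top"),
               lower=-0.4, upper=-0.2),    # tilt: opening points downward
]
```

A controller can then treat each constraint as a one-dimensional control task and move the arm so that every relation stays within its bounds, which is what makes the same description transferable across robots, objects, and tools.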

The system uses KnowRob as its knowledge base for representing and reasoning about motion descriptions and for analyzing geometric object models. The execution components have been implemented as part of the CRAM robot control framework.
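Continuing the illustrative sketch above, the overall pipeline might look as follows. In the running system these steps are Prolog queries against KnowRob and plans in the CRAM framework; the stubbed Python functions here are hypothetical and only show the flow from knowledge base to execution:

```python
def query_motion_template(action: str) -> list[Constraint]:
    """Stub: fetch a constraint template for an action class from the
    knowledge base (KnowRob answers such queries via Prolog)."""
    return pour_phase if action == "pouring" else []

def query_object_features(obj: str) -> dict[str, Feature]:
    """Stub: look up geometric features derived from the object's
    CAD model in the knowledge base."""
    return {}

def execute_phase(constraints: list[Constraint]) -> None:
    """Stub: hand one motion phase to the constraint-based controller,
    which keeps each relation within its bounds while moving."""
    for c in constraints:
        print(f"enforcing {c.relation} in [{c.lower}, {c.upper}]")

# Generating a pouring motion for a new object then amounts to
# re-grounding the same template in that object's features.
execute_phase(query_motion_template("pouring"))
```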