Decomposing CAD Models into their Functional Parts

Today’s robots still lack comprehensive knowledge bases about objects and their properties. Yet a lot of knowledge is required for manipulation tasks: the robot has to identify abstract concepts like a “handle” or the “blade of a spatula” and ground them in concrete coordinate frames that can be used to parametrize its actions.

In a recent paper, we presented a system that enables robots to use CAD models of objects as a knowledge source and to perform logical inference about object components that have been automatically identified in these models. The system includes several algorithms for mesh segmentation and geometric primitive fitting, which are integrated into the robot’s knowledge base as procedural attachments to the semantic representation. Bottom-up segmentation methods are complemented by top-down, knowledge-based analysis of the identified components. The evaluation on a diverse set of object models downloaded from the Internet shows that the algorithms can reliably detect several kinds of object parts.
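The paper’s algorithms are more involved than can be shown here, but the general idea of geometric primitive fitting can be sketched in a few lines. The following Python snippet is a minimal, illustrative RANSAC plane fit over mesh vertices; the function and parameter names are our own and not part of KnowRob:

import numpy as np

def ransac_plane(points, n_iters=500, inlier_tol=0.005, rng=None):
    """Fit a plane n.x + d = 0 to (N, 3) points with a basic RANSAC loop."""
    if rng is None:
        rng = np.random.default_rng()
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        # Sample three distinct vertices and derive the plane they span.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample, try again
            continue
        normal = normal / norm
        d = -normal.dot(p0)
        # Count vertices within inlier_tol of the candidate plane.
        inliers = np.abs(points @ normal + d) < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    if best_plane is None:
        raise ValueError("no non-degenerate sample found")
    return best_plane[0], best_plane[1], best_inliers

Analogous model hypotheses (spheres, cylinders) can be fitted the same way; the resulting parameters are the kind of geometric information that can then be grounded into coordinate frames in the knowledge base.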

2014/01/12 18:09 · admin

Aligning Specifications of Everyday Manipulation Tasks

Recently, there has been growing interest in enabling robots to use task instructions from the Internet and to share tasks they have learned with each other. To competently use, select, and combine such instructions, robots need to be able to determine whether different instructions describe the same task, which parts of them are similar, and where they differ.

In a recent paper, we investigated techniques for automatically aligning symbolic task descriptions. We propose to adapt and extend established sequence-alignment algorithms from bioinformatics in order to make them applicable to robot action specifications. The extensions include methods for comparing complex sequence elements, for taking the semantic similarity of actions into account, and for aligning descriptions at different levels of granularity. We evaluate the algorithms on two large datasets of observations of human everyday tasks and show that they are able to align action sequences performed by different subjects in very different ways.
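The underlying global-alignment scheme is the classic Needleman–Wunsch algorithm. The Python sketch below is not the paper’s implementation; the action names and the toy similarity function are our own assumptions. It shows the key adaptation: replacing the fixed match/mismatch costs of the textbook algorithm with a pluggable semantic similarity between actions.

def align(seq_a, seq_b, sim, gap=-1.0):
    """Needleman-Wunsch global alignment with a pluggable
    similarity function sim(a, b) between sequence elements."""
    n, m = len(seq_a), len(seq_b)
    # score[i][j]: best score aligning seq_a[:i] with seq_b[:j]
    score = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            score[i][j] = max(
                score[i-1][j-1] + sim(seq_a[i-1], seq_b[j-1]),  # (mis)match
                score[i-1][j] + gap,                            # gap in seq_b
                score[i][j-1] + gap,                            # gap in seq_a
            )
    # Trace back from the bottom-right cell to recover the alignment.
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i-1][j-1] + sim(seq_a[i-1], seq_b[j-1]):
            pairs.append((seq_a[i-1], seq_b[j-1])); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i-1][j] + gap:
            pairs.append((seq_a[i-1], None)); i -= 1
        else:
            pairs.append((None, seq_b[j-1])); j -= 1
    return score[n][m], pairs[::-1]

# Toy semantic similarity: identical actions score 1, actions of the
# same class (e.g. 'take' vs. 'grasp') score 0.5, everything else 0.
SAME_CLASS = {frozenset({'take', 'grasp'}), frozenset({'put', 'place'})}
def action_sim(a, b):
    if a == b:
        return 1.0
    return 0.5 if frozenset({a, b}) in SAME_CLASS else 0.0

print(align(['take', 'move', 'place'], ['grasp', 'put'], action_sim))

In practice, the similarity function would be derived from an ontology or action taxonomy rather than a hand-written table, so that semantically related actions performed by different subjects still align.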

The code for this work is available online from the KnowRob account on GitHub.


2014/01/12 17:55 · admin

KnowRob has moved to GitHub

The KnowRob code has moved to its own repository on GitHub as part of an effort to make development more open to the community. Several packages that were previously part of the tum-ros-pkg repository or other private repositories have been made available in the knowrob_addons and knowrob_human repositories.

Migration

If you use the KnowRob binary packages, you do not have to change anything. If you use the source installation, check out the new repository using:

git clone git@github.com:knowrob/knowrob.git
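The URL above assumes a GitHub account with an SSH key configured. For a read-only checkout without SSH keys, the public repository can also be cloned over HTTPS:

git clone https://github.com/knowrob/knowrob.git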

Mailing list

If you would like to stay up to date with recent developments of KnowRob, please subscribe to the knowrob-users mailing list that is used for important announcements around the KnowRob system.

2013/06/24 13:50 · admin

KnowRob article in IJRR

An extensive article about the KnowRob system and the design decisions that led to the current architecture has been published in the International Journal of Robotics Research (IJRR). The article is currently the most coherent and up-to-date description of the system and the concepts behind KnowRob.

2013/06/10 15:38 · admin

New editor for task specifications

We have created a graphical editor for task specifications (called “action recipes” in the RoboEarth project). Creating these specifications manually can be a tedious task that requires rather deep knowledge of the OWL language, and it is prone to errors like incorrect transition specifications or wrong action arguments. The graphical editor is intended to speed up the creation and updating of action recipes. It also serves as a compact visualization of an action recipe and as a supervision interface during task execution.

The following image gives an overview of its interface. In the top row, there are three groups of buttons for loading recipes from RoboEarth, for saving them back, and for starting execution on the UNR Platform. The bottom left area visualizes the task specification, i.e. the individual actions, their respective properties, and the transitions between them (whose type is indicated by color: green for 'OK', red for 'ERROR'). The two groups of form elements on the right-hand side describe properties of the recipe as a whole. The base IRI defines the namespace of the OWL elements that are part of the recipe; the default value can be kept here. The other forms are described below.

2012/11/30 05:10 · tenorth
