

Adding a dataset to data.open-ease.org


This document is meant for internal use. If you are not part of the developer team, these instructions may not work for you.


Components of a data set

A data set may consist of several different kinds of information and/or inference methods that all have to be added to the system. The most common components are listed below; the following sections describe how to include each of them:

  • KnowRob package with Prolog predicates for reasoning
  • OWL file with instances of actions and events (and possibly other information)
  • Images and other files
  • MongoDB with recorded tf data
  • Query library with prepared demo queries

Required steps for adding your own data to the system

The recommended procedure is to first test your data and to develop your queries locally, without using the docker virtualization environment. Once things work there, you can add your information to the right repositories and rebuild the docker containers so that your data is available in the system.

Develop and test locally (without docker)

It is recommended to first develop and test your data on your local system using a normal KnowRob installation. In this setup, it is much easier to debug problems, and once things work here, you can easily transfer them to the 'containerized' version. You can start a local KnowRob system with the same setup as the public web interface by running

roslaunch knowrob_roslog_launch knowrob.launch

This will start a webserver at http://localhost:1111 through which you can send queries. Note that most of the demo queries will not work unless you have the required data in your local MongoDB database. You can also start an interactive Prolog shell using the same package setup, but in order to see the visualizations in the browser, you also have to start the rosbridge server that bridges between the ROS world and the Web world:

roslaunch rosbridge_server rosbridge_websocket.launch
rosrun rosprolog rosprolog knowrob_roslog_launch

Then, inside the Prolog shell, start the visualization canvas for the browser by calling:

visualization_canvas.

This gives you a terminal that behaves like the Prolog shell in the public Web interface, but in addition you can use tools such as the guitracer for graphical debugging.
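
To check whether your local MongoDB already contains the data that the demo queries need (see the note about the local database above), a quick sanity check could look like this; it assumes the tf data has been imported into the 'roslog' database as described further below:

# count the documents in the tf collection of the local 'roslog' database
mongo roslog --eval "db.tf.count()"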

Add your own KnowRob package

If you are working on a totally new kind of data set, chances are that you would like to add your own KnowRob package with reasoning predicates specific to your data. Again, it is best to develop this locally first. This page describes the necessary steps for creating a KnowRob package. Once you have finished it, please add it to the knowrob_addons repository on GitHub, either directly if you have commit rights, or by sending a pull request. Whenever a ROS package has been added, the KnowRob (knowrob/hydro-knowrob-daemon) container has to be rebuilt.

Required actions:

Add your KnowRob package to your fork of knowrob_addons and send a pull request.
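
For illustration, a fork-and-pull-request workflow could look like the following sketch; the GitHub user name and package name are placeholders, and the same workflow applies to the knowrob_data repository described in the next section:

# clone your fork of knowrob_addons (replace <user> with your GitHub account)
git clone https://github.com/<user>/knowrob_addons.git
cd knowrob_addons

# add your new package, commit and push it
cp -r ~/catkin_ws/src/my_knowrob_package .
git add my_knowrob_package
git commit -m "Add my_knowrob_package with reasoning predicates for my data set"
git push origin master

# finally, open a pull request against knowrob/knowrob_addons on GitHub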

Add OWL files and images

The OWL files with the logged actions and plan events are the central entry point for the reasoning procedures. They can automatically be generated using the logging infrastructure described here. While the ROS packages only have to be modified when a completely new kind of data set is to be integrated, OWL files are added more frequently whenever a new experiment run has been performed.

These OWL files are stored in the knowrob_data GitHub repository. Whenever new data is pushed to this repository, a rebuild of the knowrob_data container is automatically triggered. In the same folder structure, we also store files such as images the robot has recorded. Please try to keep this repository at a reasonable size, for instance by shrinking images and storing them as JPEG (which is smaller than PNG for photos).
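
For example, a photo can be scaled down and re-encoded as JPEG with ImageMagick (assuming it is installed; file names and sizes are just placeholders):

# only shrink images larger than 1024x1024 pixels and store them as JPEG
convert photo.png -resize '1024x1024>' -quality 85 photo.jpg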

Required actions:

Add your OWL files and images to a new subfolder in your fork of knowrob_data and send a pull request.

Add MongoDB data

Currently, the MongoDB data is stored in a database that is shared among all experiments. This means we have to log into the server and import your data into this database.

Required actions:

Call the mongoimport program on the computer where the database is running (e.g. your laptop or the data.open-ease.org server). The example below is for tf data; please adapt it for any other collections you may want to import:

mongoimport --db roslog --collection tf your-tf.json
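
If the recorded data still lives in a local MongoDB instance (e.g. on the robot or your laptop), the JSON file used above can be produced with mongoexport; a minimal sketch, assuming the default 'roslog' database:

# export the tf collection of the local 'roslog' database to JSON
mongoexport --db roslog --collection tf --out your-tf.json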

Adding a query library

The entries in the query library, which is usually shown in the lower left corner of the interface, are generated from a JSON file of the following format. Each entry in the 'query' list corresponds to one row in the generated query library.

{
    "query": [
        {
            "q": "",
            "text": "----- This is a title -----"
        },
        {
            "q": "member(A, [a,b,c]).",
            "text": "This is a demo entry"
        }
    ]
}

These query library files define the entry point into your dataset, including which files and which KnowRob packages are to be loaded. They should be executable starting from a freshly launched knowrob_roslog_launch launch file. This means that the first entries in the query list usually initialize the system in the way you need it, e.g. load packages, parse OWL files etc. Currently, these query libraries are also used to select which experiment is to be loaded: the URL http://data.open-ease.org/exp/your-data will tell the system to load the query library from the file queries-your-data.json.

Required actions:

Add your query library to the 'static' folder in the web-app container in the knowrob/docker repository.
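
A minimal sketch, assuming the docker repository is checked out to ~/docker as in the next section; the exact location of the 'static' folder inside ~/docker/webapp is an assumption here, so check the repository layout:

# the file name determines the URL of the experiment:
#   queries-your-data.json  ->  http://data.open-ease.org/exp/your-data
cp queries-your-data.json ~/docker/webapp/<path-to-static-folder>/
cd ~/docker/webapp
git add <path-to-static-folder>/queries-your-data.json
git commit -m "Add query library for your-data"
git push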

Rebuilding the containers

The previous steps have added the different components of your data set (OWL log files, Prolog code, MongoDB dumps) to the right places in the system. To include this information in the docker containers, you will need to re-create some of the containers the openEASE system is composed of. Which containers have to be re-built depends on which components you have added to your data set.

For re-building them, you will need a checkout of the knowrob/docker repository, which contains a set of Dockerfiles. Similar to Makefiles, they describe the steps that need to be performed for creating the respective containers. All paths in this section assume that this repository has been checked out to ~/docker. The build commands retrieve code from the GitHub repositories, so before building the containers, all changes have to be pushed and all pull requests have to be merged into the master branch.
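
If you do not have a checkout yet, it can be obtained from GitHub, for example:

git clone https://github.com/knowrob/docker.git ~/docker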

# if you have added your own KnowRob package in knowrob_addons:
cd ~/docker/knowrob-daemon
docker build -t knowrob/hydro-knowrob-daemon .
 
# If you have added an OWL file in knowrob_data, the container 
# will automatically be built, so you can just pull it once the
# build is finished. Check the build details tab here:
# https://registry.hub.docker.com/u/knowrob/knowrob_data/
docker pull knowrob/knowrob_data
 
# if you have added your own query library:
cd ~/docker/webapp
docker build -t knowrob/webrob .

Accessing your new experiment

After rebuilding the containers, you can restart the system with the start-webrob script and access your new experiment at the corresponding URL:

./start-webrob http://data.open-ease.org/exp/your-data