These instances can, for example, be generated by an action recognition system that interacts with the knowledge base and populates the set of action instances from observations of humans. Based on these observations, the system can set parameters such as the startTime, the endTime, the objectActedOn, or the bodyPartUsed.
  
For more information, see [[http://knowrob.org/_media/bib/tenorth09dataset.pdf|tenorth09dataset]] or [[http://knowrob.org/_media/bib/beetz10ameva.pdf|beetz10ameva]].
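
As a minimal sketch of this idea (not KnowRob's actual recognition interface), an observing component could assert such an instance with plain SWI-Prolog semweb calls. The namespaces, individual names, and the exact property identifiers below are illustrative assumptions; take the real ones from the ontology you load.

<code prolog>
% Illustrative sketch: populate the knowledge base with one observed
% action instance. All IRIs, prefixes and individual names are
% placeholders; adapt them to the loaded ontology.
:- use_module(library(semweb/rdf_db)).

:- rdf_register_ns(knowrob, 'http://knowrob.org/kb/knowrob.owl#').
:- rdf_register_ns(map,     'http://knowrob.org/kb/observed_actions.owl#').

% Assert a pick-and-place action together with the parameters the
% recognition system has estimated from its observations.
assert_observed_action :-
    rdf_assert(map:'PuttingSomethingSomewhere_1', rdf:type,
               knowrob:'PuttingSomethingSomewhere'),
    rdf_assert(map:'PuttingSomethingSomewhere_1', knowrob:startTime,
               map:timepoint_1407225540),
    rdf_assert(map:'PuttingSomethingSomewhere_1', knowrob:endTime,
               map:timepoint_1407225543),
    rdf_assert(map:'PuttingSomethingSomewhere_1', knowrob:objectActedOn,
               map:'Cup_1'),
    rdf_assert(map:'PuttingSomethingSomewhere_1', knowrob:bodyPartUsed,
               map:'RightHand_1').
</code>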
  
===== Putting it all together =====

So why all this effort?
  
  * Action recognition: If you have observed a sequence of actions, the objects involved, and the locations they are put to, you can automatically (i.e. without writing any code, just using a DL reasoner) determine whether these actions are a valid instance of a given task specification. The qualitative location descriptions are generated on the fly using [[doc/reasoning_using_computables| computables]], and if everything matches the description, you get a positive result.
  
  * Action verification: If the action does not fit the specification 100%, you can check all sub-events and see which one differs; a query sketch for both checks follows below.
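
The sketch below shows how both checks could be posed as Prolog queries. It assumes that KnowRob's OWL reasoning predicate ''owl_individual_of/2'' and the computable modules are already loaded; the task class, the observed instance, and the ''subAction'' decomposition property are placeholder names used only for illustration.

<code prolog>
% Illustrative sketch; assumes owl_individual_of/2 from KnowRob's OWL
% reasoning module is loaded, so that computables can supply qualitative
% relations on demand while the class restrictions are evaluated.
:- use_module(library(semweb/rdf_db)).

:- rdf_register_ns(knowrob, 'http://knowrob.org/kb/knowrob.owl#').
:- rdf_register_ns(map,     'http://knowrob.org/kb/observed_actions.owl#').

% Action recognition: succeeds if the observed action instance is
% classified as an instance of the task specification class.
observed_action_matches_spec :-
    rdf_global_id(map:'SetATable_1', Action),
    rdf_global_id(knowrob:'SetATable', Spec),
    owl_individual_of(Action, Spec).

% Action verification: enumerate each sub-event with the classes it is
% inferred to belong to, so the deviating step can be spotted by
% comparing against the task specification.
subevent_classification(Sub, Class) :-
    rdf_global_id(map:'SetATable_1', Action),
    rdf_has(Action, knowrob:subAction, Sub),
    owl_individual_of(Sub, Class).
</code>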