Title: Object modelling for cognitive robotics
Language: English
Authors: Mörwald, Thomas 
Qualification level: Doctoral
Advisor: Vincze, Markus
Assisting Advisor: Leonardis, Aleš
Issue Date: 2013
Number of Pages: 100
The development of robots has received great attention in recent decades. Progressing from the hard-coded pick-and-place operations common in industrial applications, the need for more intelligent solutions emerged, and the field of cognitive robotics evolved, where tasks are no longer hard-coded processes executed monotonously. The concept of intelligent robots, known from science fiction, found its way into science, where methods soon appeared for reasoning, representing knowledge, interacting with humans, and so forth. This thesis focuses on cognitive perception, which allows a robot to learn and reason about objects as it perceives them. It demonstrates the importance of never-ending learning methods, the ability to handle partial information, and the fusion of knowledge from different cues for a better understanding of the appearance and properties of objects.
The appearance of an object is given by its shape and colour. Geometric models, such as B-spline curves and surfaces, are used to segment range images and simultaneously reconstruct the shape of smooth, continuous surface patches. These patches are grouped into objects according to relations inspired by Gestalt principles. Colour information is then mapped onto the shape, yielding a model of the object's appearance for a single view.
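To illustrate the kind of B-spline fitting involved, the following toy sketch fits a smooth B-spline curve to noisy samples of a closed 2D contour using SciPy's least-squares routines. The data, smoothing factor, and use of `scipy.interpolate` are illustrative assumptions; the thesis fits B-spline surfaces to range images, which is considerably more involved.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Noisy samples along a smooth contour (synthetic, illustrative data).
t = np.linspace(0, 2 * np.pi, 100)
rng = np.random.default_rng(0)
x = np.cos(t) + rng.normal(0, 0.01, t.size)
y = np.sin(t) + rng.normal(0, 0.01, t.size)

# Least-squares B-spline fit; s > 0 trades fidelity for smoothness,
# suppressing the sensor-like noise on the samples.
tck, u = splprep([x, y], s=0.05)

# Evaluate the fitted curve densely, e.g. for rendering or residual checks.
uf = np.linspace(0, 1, 400)
xf, yf = splev(uf, tck)
```

The same idea extends to tensor-product B-spline surfaces, where the smoothing term keeps reconstructed patches continuous despite noisy range data.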
The key to learning and reasoning is identifying objects that have already been learned and assigning new information to them. In the context of perception, this means visually tracking an object and self-evaluating observations to distinguish good from bad sensor data (e.g. sensor noise, occlusions, reflections, and so forth). For visual tracking, the previously reconstructed appearance of the object is used. Starting from a prior pose, a Monte Carlo Particle Filter (MCPF) evaluates multiple pose hypotheses, efficiently following the object's motion, including rigid 3D translations and rotations. A novel algorithm for evaluating the tracking state, called tracking-state-detection (TSD), is proposed; it reasons about tracking quality and detects whether tracking is valid or lost, or whether the object is occluded.
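The MCPF loop above can be sketched in miniature. The toy below tracks a single scalar pose rather than a full 6-DoF rigid transform, and scores hypotheses against a noisy scalar observation rather than by matching the rendered object appearance; all constants and the Gaussian likelihood are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 500
particles = rng.normal(0.0, 1.0, N)   # pose hypotheses around a prior
true_pose = 0.7                        # ground truth for the toy example

for _ in range(10):
    # Importance weights: likelihood of each hypothesis given a noisy
    # observation (a stand-in for the appearance-matching score).
    obs = true_pose + rng.normal(0, 0.05)
    w = np.exp(-0.5 * ((particles - obs) / 0.05) ** 2)
    w /= w.sum()

    # Systematic resampling concentrates particles on good hypotheses.
    idx = np.searchsorted(np.cumsum(w), (rng.random() + np.arange(N)) / N)
    idx = np.minimum(idx, N - 1)

    # Diffusion keeps hypothesis diversity for the next frame.
    particles = particles[idx] + rng.normal(0, 0.02, N)

estimate = particles.mean()
```

In the full 6-DoF setting, each particle is a rigid transform and its weight comes from comparing the rendered, textured model at that pose against the camera image.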
TSD makes it possible to identify new good views, from which new information can be used to extend existing appearance models, but also to learn about the physical behaviour of objects. The trajectory of an object under robotic manipulation is observed while a robotic finger pushes it, allowing a probabilistic motion model to be learned or extended. That is, probabilities are assigned between coordinate frames attached to the object, the robotic finger, and the environment. The advantage of a probabilistic physical model over Newtonian mechanics is that it generalises to new object shapes and pushing configurations, which makes it well suited to cognitive robotics.
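A minimal sketch of the probabilistic idea: instead of solving contact mechanics, fit a distribution over observed object displacements for a given push and sample from it to predict. The linear relation, noise level, and single-dimensional displacement are illustrative assumptions on synthetic data, not the thesis model, which relates full coordinate frames of object, finger, and environment.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic training data: repeated 5 cm finger pushes and the observed
# object displacements (some energy is lost to friction, plus noise).
finger_push = np.full(200, 0.05)
object_motion = 0.8 * finger_push + rng.normal(0, 0.005, 200)

# Probabilistic model: a Gaussian over outcomes for this push.
mu, sigma = object_motion.mean(), object_motion.std()

def predict(n_samples=100):
    """Sample predicted object displacements for the same push."""
    return rng.normal(mu, sigma, n_samples)

predictions = predict()
```

Because the model is learned from observation rather than derived from mass and friction parameters, it can be re-trained or extended whenever the robot encounters a new object shape or pushing configuration.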
Furthermore, physical predictions can serve as prior poses for tracking, increasing accuracy and robustness, especially in difficult situations (e.g. motion blur during fast movement, partial or full occlusion).
Keywords (German): Computer Vision; Objektrekonstruierung; Objektverfolgung; Bewegungsprediktion; Oberflächenanpassung; Kurvenanpassung; B-splines; Monte Carlo Partikel Filter; Objekt Segmentierung; Maschinelles Sehen in der Robotik
Keywords (English): computer vision; object reconstruction; visual tracking; motion prediction; surface fitting; curve fitting; B-splines; Monte Carlo particle filtering; object segmentation; robot vision
URI: https://resolver.obvsg.at/urn:nbn:at:at-ubtuw:1-57712
Library ID: AC07815493
Organisation: E376 - Institut für Automatisierungs- und Regelungstechnik 
Publication Type: Thesis
Appears in Collections:Thesis
