Schörghuber, M., Wallner, M., Jung, R., Humenberger, M., & Gelautz, M. (2018). Vision-based Autonomous Feeding Robot. In P. M. Roth, M. Welk, & M. Urschler (Eds.), Proceedings of the OAGM Workshop 2018: Medical Image Analysis (pp. 111–115). Verlag der Technischen Universität Graz. https://doi.org/10.3217/978-3-85125-603-1-23
This paper tackles the problem of vision-based indoor navigation for robotic platforms. In contrast to methods that rely on adaptations of the infrastructure (e.g., magnets or rails), vision-based methods use natural landmarks for localization. This, however, poses the challenge of robustly establishing correspondences between query images and the natural environment, which can then be used for pose estimation. We propose a monocular and stereo VSLAM algorithm that, first, generates a map of the target environment and, second, uses this map to robustly localize a robot. Our hybrid VSLAM approach utilizes map points from the previously generated map to (i) increase the robustness of its local mapping against challenging situations such as rapid movements, dominant rotations, motion blur, or inappropriate exposure times, and to (ii) continuously assess the quality of the local map. We evaluated our approach in a real-world environment as well as on public benchmark datasets. The results show that our hybrid approach improves performance in comparison to VSLAM without an offline map.
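To make the hybrid idea concrete, the following minimal Python sketch illustrates one plausible reading of points (i) and (ii) of the abstract: pose estimation is supported by correspondences from both the local map and the offline map, and the share of inliers coming from the trusted offline map is used as a simple quality score for the local map. This is not the authors' implementation; all names and thresholds are hypothetical.

    # Illustrative sketch, not the authors' implementation.
    from dataclasses import dataclass

    @dataclass
    class Correspondence:
        """A 2D-3D match between a query keypoint and a map point."""
        point_id: int
        from_offline_map: bool
        inlier: bool  # set by the (omitted) robust pose estimator

    def assess_local_map(correspondences, min_offline_ratio=0.2):
        """Fuse local and offline matches and rate the local map.

        Returns (pose_support, quality_ok): the number of inlier
        correspondences supporting the pose, and whether enough of the
        support comes from the trusted offline map.
        """
        inliers = [c for c in correspondences if c.inlier]
        offline = [c for c in inliers if c.from_offline_map]
        pose_support = len(inliers)
        # If the local map drifts (e.g., after motion blur or a rapid
        # rotation), its points stop agreeing with the offline map and
        # this ratio drops, flagging the local map as unreliable.
        ratio = len(offline) / pose_support if pose_support else 0.0
        return pose_support, ratio >= min_offline_ratio

    if __name__ == "__main__":
        # Toy example: 6 inlier matches, 2 of them from the offline map.
        matches = (
            [Correspondence(i, False, True) for i in range(4)]
            + [Correspondence(10 + i, True, True) for i in range(2)]
            + [Correspondence(20, False, False)]  # outlier, rejected
        )
        support, ok = assess_local_map(matches)
        print(f"pose supported by {support} inliers, local map ok: {ok}")

In a full system this check would run per frame, so a sustained drop in the offline-map ratio could trigger relocalization against the offline map rather than continued tracking on a degraded local map.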
Research Areas:
Visual Computing and Human-Centered Technology: 100%