Type of Publication: |
Article in conference proceedings |
Authors: |
J. Zhang, K. Huebner, A. Knoll |
Title: |
Learning based Situation Recognition by Sectoring Omnidirectional Images for Robot Localisation |
Book / Collection Title: |
Proceedings of the IEEE Workshop on Omnidirectional Vision |
Year of Publication: |
2001 |
Abstract / Short Description: |
We have developed omnidirectional vision systems by combining digital colour video cameras with conical and hyperbolic mirrors, and have applied them to mobile robots in indoor environments. A learning-based approach is introduced for localising a mobile robot mainly from vision data, without relying on landmarks. In an off-line learning step, the system is trained on the compressed input data so as to classify different situations and to associate appropriate behaviours with these situations. At run time, the compressed input data are used to determine the correspondence between the actual situation and the situations the system was trained for. The matching controller may then directly realise the desired behaviour. The algorithms are straightforward to implement, and the computational effort is much lower than with conventional vision systems. Preliminary experimental results validate the approach. |
PDF Version: |
http://www.informatik.uni-bremen.de/~khuebner/publications/ZhangHuebnerKnoll01.pdf |
Status: |
Reviewed |
Last Updated: |
09. 09. 2005 |