We consider vision-based navigation of a mobile agent (e.g. an autonomous robot or a wearable device) within an unknown, possibly wide environment. A "wandering" scenario is assumed, in which the agent often returns to places that have been visited before, so that concurrent localization and mapping is required. Although the focus of the proposal is on computer vision, we are also interested in robust fusion with inertial and other sensors. Due to the huge computational complexity, real-time applications of vision-based structure and motion estimation have not been proposed until recently. The few successful approaches are capable of mapping (i.e. of recognizing that the agent is at a previously visited location) only in very limited indoor environments, or not at all.

In order to alleviate the real-time requirements, we propose an adaptive vertical perception architecture, in which the basic behaviours (relative localization) are assigned to the lower layers, while the more complex procedures for absolute localization are placed higher. Such an architecture allows for graceful performance degradation: higher-level behaviours can be suppressed when very fast camera motion is detected and processing at full video frame rate is required. On the other hand, when the camera is moving slowly, the expensive procedures for estimating and mapping the structure of the scene can be triggered, allowing for reliable, repeatable localizations without compromising the basic functionality.

At the top level, we are interested in an extended setup with several mobile agents and a common remote host providing improved mapping and localization services. The task of the service host would be to fuse higher-level landmark measurements received from the mobile agents into a coherent overall map of the environment. Benefits of such an overall organization would include improving individual performance by allowing agents to build on the experiences of their peers, as well as establishing a foundation for coordinated activities.
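The layer-suppression idea can be illustrated with a minimal sketch. The layer names, the speed estimate, and the threshold below are hypothetical assumptions introduced for illustration, not design details from the proposal itself.

```python
# Hypothetical sketch of the adaptive vertical perception architecture:
# lower layers (relative localization) always run, while expensive
# higher layers are suppressed under fast camera motion. The layer
# names and the 0.5 threshold are illustrative assumptions only.

def select_active_layers(camera_speed, fast_motion_threshold=0.5):
    """Return the processing layers to run for the current frame."""
    layers = ["relative_localization"]          # basic behaviour, always active
    if camera_speed < fast_motion_threshold:    # slow motion: spare compute
        layers.append("absolute_localization")  # recognize revisited places
        layers.append("structure_mapping")      # expensive scene mapping
    return layers

# Fast motion: only the basic layer runs, preserving full frame rate.
print(select_active_layers(1.2))  # ['relative_localization']
# Slow motion: the higher-level procedures are triggered as well.
print(select_active_layers(0.1))
```

The point of the sketch is that degradation is graceful: dropping the higher layers never disables the basic relative-localization behaviour.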
Actual start/end: 1/09/06 → 31/08/07