To act robustly in a dynamic real-world environment, an autonomous robot must be able to cope autonomously with unexpected, unforeseen, or ambiguous situations.
A common reason for such situations is that the current state of the world is inconsistent with the robot's internal belief or knowledge base. For instance, the robot may believe it is in a different office than it actually is. Usually this is caused by uncertainties in the robot's acting and sensing or by exogenous events the robot is unable to perceive or control. If a robot is not aware of such situations, it is doomed to fail at fulfilling its task, because its decision making relies on a consistent belief.
Thanks to their reasoning capabilities, humans are very good at handling such phenomena. They use common-sense reasoning to detect such inconsistencies. Moreover, they are able to perform actions in order to reduce inconsistencies. For instance, a person who does not know exactly which floor of a building they are on may go back to the elevator or staircase and look for the right floor.
In this project we propose a reasoning approach that allows a robot to detect inconsistencies in its belief (abstract knowledge base) and to derive repair actions that remove, or at least reduce, these inconsistencies. The approach uses a background model (common-sense knowledge) of how the robot and its environment should work, together with methods of model-based diagnosis, to detect inconsistencies in the belief and to locate the root cause of an inconsistency, e.g., facts which are wrong or uncertain. Furthermore, the approach automatically generates repair plans the robot can execute in order to reduce the inconsistency by confirming facts or deleting them from the knowledge base.
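The core idea above can be illustrated with a minimal sketch. All names and facts here are hypothetical examples, and the diagnosis step is a naive enumeration rather than a full consequence-based diagnosis engine: the belief is a set of facts, the background model is a consistency check (here, a single common-sense rule that the robot occupies at most one room), diagnoses are minimal sets of suspect facts whose removal restores consistency, and a repair plan asks the robot to confirm or delete each suspect fact.

```python
from itertools import combinations

# Hypothetical belief base: facts the robot currently holds.
belief = {"at(robot, office_a)", "at(robot, office_b)", "door_open(office_a)"}

def consistent(facts):
    """Toy background model: the robot is in at most one room."""
    locations = [f for f in facts if f.startswith("at(robot")]
    return len(locations) <= 1

def diagnoses(facts):
    """Minimal sets of facts whose removal restores consistency
    (a brute-force stand-in for model-based diagnosis)."""
    found = []
    for size in range(len(facts) + 1):
        for subset in combinations(sorted(facts), size):
            if consistent(facts - set(subset)):
                found.append(subset)
        if found:
            return found  # all diagnoses of minimal size
    return found

def repair_plan(diagnosis):
    """For each suspect fact, plan a sensing action that either
    confirms the fact or deletes it from the knowledge base."""
    return [f"verify_or_delete({fact})" for fact in diagnosis]

for d in diagnoses(belief):
    print("diagnosis:", d, "plan:", repair_plan(d))
```

Running the sketch on the example belief yields two minimal diagnoses, one for each conflicting location fact; the corresponding repair plans would send the robot to sense its actual location, mirroring the "go back to the elevator and check the floor" behavior described above.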