Adaptive Shielding under Uncertainty

Research output: Working paper › Preprint

Abstract

This paper targets control problems with specific safety and performance requirements. In particular, the aim is to ensure that an agent operating under uncertainty strictly adheres to these requirements at runtime. Previous works create so-called shields that correct an existing controller for the agent if it is about to take unacceptable safety risks. However, so far, shields do not account for the fact that an environment may not be fully known in advance and may evolve in complex control and learning tasks. We propose a new method for the efficient computation of a shield that adapts to a changing environment. In particular, we address problems that are sufficiently captured by potentially infinite Markov decision processes (MDPs) and quantitative specifications such as mean-payoff objectives. The shield is independent of the controller, which may, for instance, be a high-performing reinforcement learning agent. At runtime, our method builds an internal abstract representation of the MDP and continually adapts this abstraction and the shield based on observations from the environment. We showcase the applicability of our method on an urban traffic control problem.
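To make the shielding idea concrete, the following is a minimal, hypothetical Python sketch of a runtime shield that sits between a controller and the environment: it learns empirical transition statistics from observations and overrides a proposed action when the estimated probability of reaching an unsafe state exceeds a threshold. All names, the risk estimator, and the threshold rule are illustrative assumptions, not the paper's actual algorithm, which works over MDP abstractions and mean-payoff objectives.

```python
from collections import defaultdict

class AdaptiveShield:
    """Illustrative shield: blocks a controller's action when the
    empirically estimated risk of entering an unsafe state is too high.
    The abstraction here is a simple transition-frequency table,
    updated from runtime observations (a stand-in for the paper's
    adaptive MDP abstraction)."""

    def __init__(self, actions, unsafe_states, risk_threshold=0.1):
        self.actions = list(actions)
        self.unsafe = set(unsafe_states)
        self.threshold = risk_threshold
        # counts[(state, action)][next_state] -> observed frequency
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, state, action, next_state):
        """Adapt the internal model with one runtime observation."""
        self.counts[(state, action)][next_state] += 1

    def risk(self, state, action):
        """Empirical probability that (state, action) leads to an unsafe state."""
        outcomes = self.counts[(state, action)]
        total = sum(outcomes.values())
        if total == 0:
            return 0.0  # optimistic prior; a conservative shield might use 1.0
        bad = sum(n for s, n in outcomes.items() if s in self.unsafe)
        return bad / total

    def shield(self, state, proposed_action):
        """Pass the controller's action through if it is safe enough;
        otherwise substitute the lowest-risk admissible alternative."""
        if self.risk(state, proposed_action) <= self.threshold:
            return proposed_action
        alternatives = [a for a in self.actions
                        if self.risk(state, a) <= self.threshold]
        if not alternatives:
            return proposed_action  # no safe action known; defer to controller
        return min(alternatives, key=lambda a: self.risk(state, a))
```

Because the shield only intercepts and corrects actions, the controller itself (e.g. a reinforcement learning agent) needs no modification; the shield's decisions sharpen as more observations arrive.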
Original language: English
Number of pages: 8
Publication status: Published - 8 Oct 2020

Publication series

Name: arXiv.org e-Print archive
Publisher: Cornell University Library

Keywords

  • cs.LO
