Run-Time Optimization for Learned Controllers Through Quantitative Games

Guy Avni, Roderick Bloem, Krishnendu Chatterjee, Thomas Henzinger, Bettina Könighofer, Stefan Pranger

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

A controller is a device that interacts with a plant. At each time point, it reads the plant’s state and issues commands with the goal that the plant operates optimally. Constructing optimal controllers is a fundamental and challenging problem. Machine learning techniques have recently been successfully applied to train controllers, yet they have limitations. Learned controllers are monolithic and hard to reason about. In particular, it is difficult to add features without retraining, to guarantee any level of performance, and to achieve acceptable performance when encountering untrained scenarios. These limitations can be addressed by deploying quantitative run-time shields that serve as a proxy for the controller. At each time point, the shield reads the command issued by the controller and may choose to alter it before passing it on to the plant. We show how optimal shields, which interfere as little as possible while guaranteeing a desired level of controller performance, can be generated systematically and automatically using reactive synthesis. First, we abstract the plant by building a stochastic model. Second, we consider the learned controller to be a black box. Third, we measure controller performance and shield interference by two quantitative run-time measures that are formally defined using weighted automata. Then, the problem of constructing a shield that guarantees maximal performance with minimal interference is the problem of finding an optimal strategy in a stochastic 2-player game “controller versus shield” played on the abstract state space of the plant, with a quantitative objective obtained from combining the performance and interference measures. We illustrate the effectiveness of our approach by automatically constructing lightweight shields for learned traffic-light controllers in various road networks. The shields we generate avoid liveness bugs, improve controller performance in untrained and changing traffic situations, and add features to learned controllers, such as giving priority to emergency vehicles.
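The run-time architecture described in the abstract can be illustrated with a small sketch. The Python fragment below is not the authors' implementation: the names (Shield, step, run) and the toy traffic-light states are assumptions chosen purely for illustration. It only shows how a precomputed shield strategy sits between a black-box learned controller and the plant, altering a command only when its strategy prescribes it, while counting interferences.

```python
# Minimal sketch of a quantitative run-time shield wrapper (illustrative only;
# the class/function names and the toy traffic-light example are assumptions,
# not the authors' API).

from typing import Callable, Dict, Hashable, Tuple

Command = str
AbstractState = Hashable


class Shield:
    """Precomputed shield strategy: maps (abstract state, controller command)
    to the command that is actually sent to the plant."""

    def __init__(self, strategy: Dict[Tuple[AbstractState, Command], Command]):
        self.strategy = strategy
        self.interferences = 0  # how often the shield altered a command

    def step(self, state: AbstractState, command: Command) -> Command:
        # Pass the command through unchanged unless the strategy overrides it.
        shielded = self.strategy.get((state, command), command)
        if shielded != command:
            self.interferences += 1
        return shielded


def run(controller: Callable[[AbstractState], Command],
        shield: Shield,
        plant_step: Callable[[AbstractState, Command], AbstractState],
        state: AbstractState,
        horizon: int) -> AbstractState:
    """Run-time loop: the controller proposes, the shield may override,
    the plant executes the (possibly altered) command."""
    for _ in range(horizon):
        proposed = controller(state)           # black-box learned controller
        executed = shield.step(state, proposed)
        state = plant_step(state, executed)
    return state


if __name__ == "__main__":
    # Toy example: when an emergency vehicle is waiting, the shield forces the
    # controller's "keep" command to "switch" -- a feature added without retraining.
    shield = Shield({("emergency_waiting", "keep"): "switch"})
    controller = lambda s: "keep"
    plant_step = lambda s, c: "idle" if c == "switch" else "emergency_waiting"
    final = run(controller, shield, plant_step, "emergency_waiting", horizon=3)
    print("final abstract state:", final, "| interferences:", shield.interferences)
```

In the paper, the strategy consulted by the shield is computed offline by solving the stochastic two-player game on the abstract plant model with the combined performance/interference objective; the sketch above only shows how such a strategy would be consulted at run time.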
Original language: English
Title of host publication: Computer Aided Verification (CAV)
Editors: I. Dillig, S. Tasiran
Publisher: Springer
Pages: 630-649
Volume: 11561
Edition: 31
ISBN (Print): 978-3-030-25539-8
Publication status: Published - 15 Jul 2019
Event: 31st International Conference on Computer Aided Verification - The New School, New York City, United States
Duration: 15 Jul 2019 - 18 Jul 2019

Publication series

Name: Lecture Notes in Computer Science
Volume: 11561

Conference

Conference: 31st International Conference on Computer Aided Verification
Abbreviated title: CAV 2019
Country/Territory: United States
City: New York City
Period: 15/07/19 - 18/07/19
