TY - GEN
T1 - Learning Finite State Models from Recurrent Neural Networks
AU - Muškardin, Edi
AU - Aichernig, Bernhard K.
AU - Pill, Ingo
AU - Tappler, Martin
N1 - Funding Information:
Acknowledgments. This work has been supported by the “University SAL Labs” initiative of Silicon Austria Labs (SAL) and its Austrian partner universities for applied fundamental research for electronic based systems.
Publisher Copyright:
© 2022, Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
N2 - Explaining and verifying the behavior of recurrent neural networks (RNNs) is an important step towards achieving confidence in machine learning. The extraction of finite state models, like deterministic automata, has been shown to be a promising concept for analyzing RNNs. In this paper, we apply a black-box approach based on active automata learning combined with model-guided conformance testing to learn finite state machines (FSMs) from RNNs. The technique efficiently infers a formal model of an RNN classifier’s input-output behavior, regardless of its inner structure. In several experiments, we compare this approach to other state-of-the-art FSM extraction methods. By detecting imprecise generalizations in RNNs that other techniques miss, model-guided conformance testing learns FSMs that more accurately model the RNNs under examination. We demonstrate this by identifying counterexamples with this testing approach that falsify wrong hypothesis models learned by other techniques. This entails that testing guided by learned automata can be a useful method for finding adversarial inputs, that is, inputs incorrectly classified due to improper generalization.
AB - Explaining and verifying the behavior of recurrent neural networks (RNNs) is an important step towards achieving confidence in machine learning. The extraction of finite state models, like deterministic automata, has been shown to be a promising concept for analyzing RNNs. In this paper, we apply a black-box approach based on active automata learning combined with model-guided conformance testing to learn finite state machines (FSMs) from RNNs. The technique efficiently infers a formal model of an RNN classifier’s input-output behavior, regardless of its inner structure. In several experiments, we compare this approach to other state-of-the-art FSM extraction methods. By detecting imprecise generalizations in RNNs that other techniques miss, model-guided conformance testing learns FSMs that more accurately model the RNNs under examination. We demonstrate this by identifying counterexamples with this testing approach that falsify wrong hypothesis models learned by other techniques. This entails that testing guided by learned automata can be a useful method for finding adversarial inputs, that is, inputs incorrectly classified due to improper generalization.
KW - Active automata learning
KW - Finite state machines
KW - Recurrent neural networks
KW - Verifiable machine learning
UR - http://www.scopus.com/inward/record.url?scp=85131914844&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-07727-2_13
DO - 10.1007/978-3-031-07727-2_13
M3 - Conference paper
AN - SCOPUS:85131914844
SN - 9783031077265
T3 - Lecture Notes in Computer Science
SP - 229
EP - 248
BT - Integrated Formal Methods - 17th International Conference, IFM 2022, Proceedings
A2 - ter Beek, Maurice H.
A2 - Monahan, Rosemary
PB - Springer Science and Business Media Deutschland GmbH
CY - Cham
T2 - 17th International Conference on Integrated Formal Methods
Y2 - 7 June 2022 through 10 June 2022
ER -