On the use of available testing methods for verification & validation of AI-based software and systems

Franz Wotawa*

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

Verification and validation of software and systems are essential parts of the development cycle, ensuring that given quality criteria, including functional and non-functional requirements, are met. Testing, and in particular its automation, has been an active research area for decades, providing many methods and tools for automating test case generation and execution. Due to the increasing use of AI in software and systems, the question arises whether available testing techniques can be utilized in the context of AI-based systems. In this position paper, we elaborate on testing issues that arise when using AI methods in systems, consider the case of different stages of AI, and begin investigating the usefulness of certain testing methods for testing AI. We focus especially on testing at the system level, where we are interested not only in assuring that a system is correctly implemented but also that it meets given criteria, such as not contradicting moral rules or being dependable. We argue that some well-known testing techniques can still be applied, provided they are tailored to the specific needs.

Original language: English
Number of pages: 6
Journal: CEUR Workshop Proceedings
Volume: 2808
Publication status: Published - 2021
Event: 2021 Workshop on Artificial Intelligence Safety, SafeAI 2021 - Virtual, Online
Duration: 8 Feb 2021 → …

ASJC Scopus subject areas

  • Computer Science (all)

