Rigorous testing of automated and autonomous systems is indispensable, especially in the case of safety-critical systems like cars or airplanes. Several functional safety standards have to be fulfilled, e.g., IEC 61508, which explicitly states that AI methodologies are not recommended for systems with higher safety requirements. Hence, there is a need to adapt these standards so that AI methodology is allowed, provided that certain standardized quality assurance methods are applied during development. In this paper, we contribute to this endeavor and discuss the urgent need for system testing in the context of safety-critical systems comprising AI methodologies. In particular, we argue, based on one example from the automotive industry, that it is strongly recommended to consider not only subsystems but the whole system interacting with its environment when carrying out tests. The discussed example is an advanced driver-assistance system for emergency braking that does not rely on machine learning but comprises a decision part that invokes braking once the sensors identify an obstacle that might otherwise be hit. Results obtained from an already reported testing methodology revealed that tests considering the environment of an automated emergency braking system yield critical scenarios that might otherwise not have been detected. From this observation, we conclude that rigorous system testing becomes even more important for systems with AI methodology based on machine learning or systems that adapt their behavior during operation.
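To make the idea of environment-aware system testing concrete, the following is a minimal sketch, not taken from the paper: a toy emergency-braking decision rule exercised against environment scenarios (obstacle distance and ego speed) rather than in isolation. All function names, thresholds, and the simple kinematics are assumptions for illustration only.

```python
def stopping_distance(speed_mps: float, decel_mps2: float = 8.0,
                      reaction_s: float = 0.2) -> float:
    """Distance needed to stop: reaction travel plus braking distance."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)


def aeb_should_brake(obstacle_m: float, speed_mps: float,
                     margin_m: float = 2.0) -> bool:
    """Invoke braking when the obstacle lies within the stopping
    distance plus a safety margin (illustrative rule only)."""
    return obstacle_m <= stopping_distance(speed_mps) + margin_m


# A system-level test pairs the decision logic with concrete
# environment scenarios instead of testing the rule alone.
scenarios = [
    {"obstacle_m": 50.0, "speed_mps": 10.0, "expect_brake": False},
    {"obstacle_m": 10.0, "speed_mps": 15.0, "expect_brake": True},
    {"obstacle_m": 4.0,  "speed_mps": 5.0,  "expect_brake": True},
]

for s in scenarios:
    assert aeb_should_brake(s["obstacle_m"], s["speed_mps"]) == s["expect_brake"]
```

Searching over such scenario parameters (speeds, distances, obstacle trajectories) is what can surface critical situations that subsystem-only tests miss.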
Journal: CEUR Workshop Proceedings
Publication status: Published - 2019
Event: 2019 Workshop on Artificial Intelligence Safety, AISafety 2019 - Macao, China
Duration: 11 Aug 2019 → 12 Aug 2019
ASJC Scopus subject areas
- Computer Science (all)