The Next Frontier: AI We Can Really Trust

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

Enormous advances in statistical machine learning, the availability of large amounts of training data, and increasing computing power have made Artificial Intelligence (AI) very successful. For certain tasks, algorithms can even achieve performance beyond the human level. Unfortunately, the most powerful methods suffer from two shortcomings: it is difficult to explain why a certain result was achieved, and they lack robustness. Our most powerful machine learning models are sensitive to even small changes: perturbations in the input data can have a dramatic impact on the output and lead to entirely different results. This is of great importance in virtually all critical domains where we suffer from low data quality, i.e., where we do not have the expected i.i.d. data. Therefore, the use of AI in domains that impact human life (agriculture, climate, health, ...) has led to an increased demand for trustworthy AI. Explainability is now even mandatory due to regulatory requirements in sensitive domains such as medicine, which require traceability, transparency and interpretability capabilities. One possible step towards making AI more robust is to combine statistical learning with knowledge representations. For certain tasks, it can be advantageous to use a human in the loop: a human expert can, sometimes though of course not always, bring experience, domain knowledge and conceptual understanding to the AI pipeline. Such approaches are not only a solution from a legal point of view; in many application areas, the “why” is often more important than a pure classification result. Consequently, both explainability and robustness can promote reliability and trust and ensure that humans remain in control, thus complementing human intelligence with artificial intelligence.
Original language: English
Title of host publication: Machine Learning and Principles and Practice of Knowledge Discovery in Databases - International Workshops of ECML PKDD 2021, Proceedings
Subtitle of host publication: ECML PKDD 2021
Editors: Michael Kamp, Irena Koprinska, Adrien Bibal, Tassadit Bouadi, Benoît Frénay, Luis Galárraga, José Oramas, Linara Adilova, Yamuna Krishnamurthy, Bo Kang, Christine Largeron, Jefrey Lijffijt, Tiphaine Viard, Pascal Welke, Massimiliano Ruocco, Erlend Aune, Claudio Gallicchio, Gregor Schiele, Franz Pernkopf
Place of Publication: Cham
Publisher: Springer
Pages: 427-440
Number of pages: 14
ISBN (Electronic): 978-3-030-93736-2
ISBN (Print): 978-3-030-93735-5
DOIs
Publication status: Published - 2021
Event: Joint European Conference on Machine Learning and Knowledge Discovery in Databases: ECML PKDD 2021 - Bilbao, Spain
Duration: 13 Sept 2021 – 17 Sept 2021
https://2021.ecmlpkdd.org/

Publication series

Name: Communications in Computer and Information Science
Volume: 1524

Conference

Conference: Joint European Conference on Machine Learning and Knowledge Discovery in Databases
Abbreviated title: ECML PKDD 2021
Country/Territory: Spain
City: Bilbao
Period: 13/09/21 – 17/09/21

Keywords

  • Trust
  • trusted AI
  • trustworthy AI
  • Robustness
  • Human-in-the-loop
  • Trustworthy Artificial Intelligence
  • Explainable AI
  • Artificial intelligence

ASJC Scopus subject areas

  • Artificial Intelligence
  • Mathematics(all)
  • Computer Science(all)

Fields of Expertise

  • Information, Communication & Computing

Treatment code (Nähere Zuordnung)

  • Basic - Fundamental (Grundlagenforschung)
