Strong Explanations in Abstract Argumentation

Markus Ulbricht, Johannes Peter Wallner

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

Abstract argumentation constitutes both a major research strand and a key approach that provides the core reasoning engine for a multitude of formalisms in computational argumentation in AI. Reasoning in abstract argumentation is carried out by viewing arguments and their relationships as abstract entities, with argumentation frameworks (AFs) being the most commonly used abstract formalism. Argumentation semantics then drive the reasoning by specifying formal criteria on which sets of arguments, called extensions, can be deemed jointly acceptable. Such extensions provide a basic way of explaining argumentative acceptance. Inspired by recent research, we present a more general class of explanations: in this paper we propose and study so-called strong explanations for explaining argumentative acceptance in AFs. A strong explanation is a set of arguments such that a target set of arguments is acceptable in each subframework containing the explaining set. We formally show that strong explanations form a larger class than extensions, in particular allowing for smaller explanations. Moreover, assuming basic properties, we show that any explanation strategy, broadly construed, is a strong explanation. We show that the increase in variety of strong explanations comes with a computational trade-off: we provide an in-depth analysis of the associated complexity, showing a jump in the polynomial hierarchy compared to extensions.
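
To make the notion concrete, the following Python sketch checks the strong-explanation property by brute force. It is a minimal illustration, not the paper's algorithm: it assumes that "acceptable" means credulous acceptance under admissible semantics, that subframeworks are restrictions of the AF to argument subsets containing the explaining set, and that an AF is given as a set of arguments plus a set of attack pairs; all function names are hypothetical.

from itertools import combinations

def powerset(items):
    # All subsets of items, as frozensets.
    items = list(items)
    return (frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r))

def is_conflict_free(s, att):
    # No member of s attacks another member of s.
    return not any((a, b) in att for a in s for b in s)

def defends(s, arg, att, args):
    # Every attacker of arg is counter-attacked by some member of s.
    return all(any((d, b) in att for d in s)
               for b in args if (b, arg) in att)

def is_admissible(s, att, args):
    # Admissible set: conflict-free and defends all of its members.
    return is_conflict_free(s, att) and all(defends(s, a, att, args) for a in s)

def credulously_accepted(target, args, att):
    # target is contained in at least one admissible set of the AF (args, att).
    return any(target <= s and is_admissible(s, att, args)
               for s in powerset(args))

def is_strong_explanation(expl, target, args, att):
    # Assumed reading of the definition: target is accepted in every
    # subframework induced by an argument set that contains expl.
    for sub in powerset(args):
        if expl <= sub:
            sub_att = {(a, b) for (a, b) in att if a in sub and b in sub}
            if not credulously_accepted(target, sub, sub_att):
                return False
    return True

# Toy AF: b attacks a, c attacks b.
args = frozenset({"a", "b", "c"})
att = {("b", "a"), ("c", "b")}
print(is_strong_explanation(frozenset({"a", "c"}), frozenset({"a"}), args, att))  # True: c defends a wherever b occurs
print(is_strong_explanation(frozenset({"a"}), frozenset({"a"}), args, att))       # False: in subframework {a, b}, a is undefended

The nested enumeration of subframeworks and candidate admissible sets makes this sketch exponential and usable only on toy instances, which is consistent with the complexity jump reported in the abstract.
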
Original language: English
Title of host publication: Proceedings AAAI
Publisher: AAAI Press
Pages: 6496-6504
Volume: 35(7)
Publication status: Published - 2021
