ArgNLE

The First Workshop on Natural Language Argument-Based Explanations

Co-located with ECAI 2024. October 20th 2024, Universidad de Santiago de Compostela.

Content & Topics

Explainability and Computational Argumentation have usually been approached as separate, independent research topics, an approach that neglects the many aspects arising from their interdependencies. To be effective for human users, explanations need to be formulated in natural language, possibly in an argumentative fashion. This workshop on Natural Language Argument-Based Explanations is proposed to investigate this challenging topic, at the crossroads of these research fields.

Providing high-quality explanations for AI predictions based on machine learning is a challenging and complex task. To work well, it requires, among other factors: selecting a proper level of generality/specificity for the explanation; considering assumptions about the familiarity of the explanation's beneficiary with the AI task under consideration; referring to specific elements that have contributed to the decision; making use of additional knowledge (e.g., metadata) which might not be part of the prediction process; selecting appropriate examples; and providing evidence supporting negative hypotheses. Finally, the system needs to formulate the explanation in a clearly interpretable, and possibly convincing, way.

Given these considerations, the workshop welcomes contributions showing an integrated vision of Explainable AI (XAI), where low-level characteristics of the deep learning process are combined with higher-level schemas characteristic of the human capacity for argumentation. This integrated vision relies on three main considerations:

  • In neural architectures, the correlation between the internal states of the network and the justification of the network's classification outcome is not well studied
  • High quality explanations are crucially based on argumentation mechanisms (e.g., provide supporting examples and rejected alternatives)
  • In real settings, providing explanations is inherently an interactive process involving the system and the user

Accordingly, the workshop calls for cross-disciplinary contributions in three areas, i.e., deep learning, argumentation and interactivity, to support a broader and innovative view of explainable AI.

More precisely, the workshop is intended to discuss research challenges that will help advance the state of the art in explainable AI. Providing explanations to support a certain conclusion has been studied extensively in logic, as a fundamental characteristic of human reasoning. As a result, both theoretical and computational models of human argumentation have been investigated. The recent resurgence of AI has highlighted the idea that low-level system behaviors not only need to be interpretable (e.g., showing the elements that contributed most to the system's decision), but also need to fit high-level human schemas to produce convincing arguments.

Suggested topics of the workshop include but are not limited to:

  • Natural language argument-based explanations
  • Neuro-symbolic explainable argumentation
  • Dialectical, dialogical and conversational explanations
  • AI methods to support argumentative explainability
  • User-acceptance and evaluation of argumentation-based explanations
  • Tools that provide argumentation-based explanations
  • Use of argument-based explanations for research from the social sciences, digital humanities, and related fields
  • Real-world applications, including argument-based explanations search, customer reviews, argument analysis in meetings, and applications in specific domains, such as education, law, and scientific writing

Invited speakers

  • Prof. Francesca Toni - Faculty of Engineering, Department of Computing, Imperial College London, UK
    • Title: Argumentative Explanations for Veracity-Checking
    • Abstract: AI has become pervasive in recent years, and the need for explainability is widely agreed upon as crucial towards safe and trustworthy deployment of AI systems, especially given the plethora of opportunities for misinformation, hallucinations and malicious behaviour in data-driven AI. In this talk I will overview approaches based on computational argumentation for explaining veracity-checking in a number of incarnations, including for fact checking, for detecting scientific fraud, and for claim verification. I will advocate computational argumentation as ideally suited to support explainable veracity checking that can (1) interact to progressively explain outputs and/or reasoning as well as assess grounds for contestation provided by humans and/or other machines, and (2) revise decision-making processes to redress any issues successfully raised during contestation.
    • BIO: Francesca Toni is Professor in Computational Logic and Royal Academy of Engineering/JP Morgan Research Chair on Argumentation-based Interactive Explainable AI (XAI) at the Department of Computing, Imperial College London, UK, as well as the founder and leader of the CLArg (Computational Logic and Argumentation) research group and of the Faculty of Engineering XAI Research Centre. She holds an ERC Advanced grant on Argumentation-based Deep Interactive eXplanations (ADIX). Her research interests lie within the broad area of Explainable AI, at the intersection of Knowledge Representation and Reasoning, Machine Learning, Computational Argumentation, Argument Mining, and Multi-Agent Systems. She is a EurAI fellow, an IJCAI Trustee, a member of the Board of Directors of KR Inc., a member of the editorial board of the Argument and Computation journal, Editorial Advisor for Theory and Practice of Logic Programming, and an associate editor for the AI journal.

Schedule

  • 14:00-14:05 Welcome
  • 14:05-15:00 Invited talk by Francesca Toni
  • 15:00-15:30 Felix Liedeker, Olivia Sanchez-Graillet, Philipp Cimiano, Jörg Wellmer, Moana Seidler and Christian Brandt. A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support
  • 15:30-16:15 Coffee break with posters on the CHIST-ERA ANTIDOTE project
  • 16:15-16:45 Chris Reed. Argument, Explanation and Inference: A position paper comprising footnotes to Hahn & Tesic
  • 16:45-17:30 Discussion panel and closing remarks

Registration

Participants can register through this link.

Important dates

  • Submission deadline: EXTENDED to June 30th 2024
  • Notification of acceptance: July 19th 2024
  • Camera-ready papers: July 31st 2024
  • Workshop: October 20th 2024

Submission instructions

    Papers must be written in English, be prepared for double-blind review using the ECAI LaTeX template, and not exceed 7 pages (not including references). Papers should be submitted via EasyChair.

Organizers

Committee

  • Aitziber Atutxa - HiTZ Center - Ixa, University of the Basque Country UPV/EHU, Spain
  • Maite Oronoz - HiTZ Center - Ixa, University of the Basque Country UPV/EHU, Spain
  • German Rigau - HiTZ Center - Ixa, University of the Basque Country UPV/EHU, Spain
  • Petar Bodlović - IFILNOVA, Universidade Nova de Lisboa, Portugal
  • Fabrizio Macagno - IFILNOVA, Universidade Nova de Lisboa, Portugal
  • Maria Grazia Rossi - IFILNOVA, Universidade Nova de Lisboa, Portugal
  • Victor David - Université Côte d’Azur, Inria, CNRS, I3S, France
  • Benjamin Molinet - Université Côte d’Azur, Inria, CNRS, I3S, France
  • Theo Alkibiades Collias - Université Côte d’Azur, Inria, CNRS, I3S, France
  • Alberto Lavelli - Fondazione Bruno Kessler, Italy
  • Andrea Zaninello - Fondazione Bruno Kessler, Italy
  • Kanimozhi Uma - KU Leuven, Belgium
  • Wei Sun - KU Leuven, Belgium