Co-located with ECAI 2024. October 20th, 2024, Universidad de Santiago de Compostela.
Explainability and Computational Argumentation have usually been approached as separate, independent research topics, neglecting many aspects that arise from considering their interdependencies. To be effective for human users, explanations need to be formulated in natural language, possibly in an argumentative fashion. This workshop on exploring Natural language Argument-based Explanations is proposed to investigate this challenging topic, at the crossroads of these research fields.
Providing high-quality explanations for AI predictions based on machine learning is a challenging and complex task. Doing it well requires, among other factors: selecting a proper level of generality/specificity for the explanation; considering assumptions about the explanation beneficiary's familiarity with the AI task under consideration; referring to the specific elements that contributed to the decision; making use of additional knowledge (e.g., metadata) that might not be part of the prediction process; selecting appropriate examples; and providing evidence supporting negative hypotheses. Finally, the system needs to formulate the explanation in a clearly interpretable, and possibly convincing, way.
Given these considerations, the workshop welcomes contributions presenting an integrated vision of Explainable AI (XAI), where low-level characteristics of the deep learning process are combined with higher-level schemas proper to human argumentation. This integrated vision relies on three main considerations:
More precisely, the workshop is intended to discuss research challenges that will advance the state of the art in explainable AI. Providing explanations to support a certain conclusion has been extensively studied in logic, as a fundamental characteristic of human reasoning, and as a result both theoretical and computational models of human argumentation have been investigated. The recent resurgence of AI has highlighted the idea that low-level system behaviors not only need to be interpretable (e.g., by showing the elements that most contributed to the system's decision), but also need to fit high-level human schemas to produce convincing arguments.
Suggested topics of the workshop include but are not limited to:
Participants can register through this link.
Papers must be written in English, be prepared for double-blind review using the ECAI LaTeX template, and not exceed 7 pages (not including references). Papers should be submitted via EasyChair.
This work has been partially supported by the French government through the 3IA Côte d'Azur Investments in the Future project managed by the National Research Agency (ANR), reference number ANR-19-P3IA-0002. This work was also partially supported by a CHIST-ERA grant of the ANR Call XAI 2019, grant number Project-ANR-21-CHR4-0002.
We also acknowledge the support of the following MCIN/AEI/10.13039/501100011033 projects: