
Search results

  • Conference paper
    Rago A, Vasileiou SL, Tran S, Toni F, Yeoh W et al., 2025,

    A methodology for incompleteness-tolerant and modular gradual semantics for argumentative statement graphs

    , 22nd International Conference on Principles of Knowledge Representation and Reasoning (KR 2025), Publisher: International Joint Conferences on Artificial Intelligence Organization

    Gradual semantics (GS) have demonstrated great potential in argumentation, in particular for deploying quantitative bipolar argumentation frameworks (QBAFs) in a number of real-world settings, from judgmental forecasting to explainable AI. In this paper, we provide a novel methodology for obtaining GS for statement graphs, a form of structured argumentation framework, where arguments and relations between them are built from logical statements. Our methodology differs from existing approaches in the literature in two main ways. First, it naturally accommodates incomplete information, so that arguments with partially specified premises can play a meaningful role in the evaluation. Second, it is modularly defined to leverage any GS for QBAFs. We also define a set of novel properties for our GS and study their suitability alongside a set of existing properties (adapted to our setting) for two instantiations of our GS, demonstrating their advantages over existing approaches.

  • Conference paper
    Rago A, Palfi B, Sukpanichnant P, Nabli H, Vivek K, Kostopoulou O, Kinross J, Toni F et al., 2025,

    Exploring the effect of explanation content and format on user comprehension and trust in healthcare

    , The 14th Conference on Prestigious Applications of Intelligent Systems (PAIS 2025), Publisher: IOS Press

    AI-driven tools for healthcare are widely acknowledged as potentially beneficial to health practitioners and patients, e.g. the QCancer regression tool for cancer risk prediction. However, for these tools to be trusted, they need to be supplemented with explanations. We examine how explanations’ content and format affect user comprehension and trust when explaining QCancer’s predictions. Regarding content, we deploy SHAP and Occlusion-1. Regarding format, we present SHAP explanations, conventionally, as charts (SC) and Occlusion-1 explanations as charts (OC) as well as text (OT), to which their simpler nature lends itself. We conduct experiments with two sets of stakeholders: the general public (representing patients) and medical students (representing healthcare practitioners). Our experiments showed higher subjective comprehension and trust for Occlusion-1 over SHAP explanations based on content. However, when controlling for format, only OT outperformed SC, suggesting this trend is driven by preferences for text. Other findings corroborated that explanation format, rather than content, is often the critical factor.

  • Journal article
    Dickie C, Lauren S, Belardinelli F, Rago A, Toni F et al., 2025,

    Aggregating bipolar opinions through bipolar assumption-based argumentation

    , Autonomous Agents and Multi-Agent Systems, Vol: 39, ISSN: 1387-2532

    We introduce a novel method to aggregate Bipolar Argumentation Frameworks expressing opinions of different parties in debates. We use Bipolar Assumption-based Argumentation (ABA) as an all-encompassing formalism for Bipolar Argumentation under different semantics. By leveraging recent results on judgement aggregation in Social Choice Theory, we prove several preservation results for relevant properties of Bipolar ABA using quota and oligarchic rules. Specifically, we prove (positive and negative) results about the preservation of conflict-free, closed, admissible, preferred, complete, set-stable, well-founded and ideal extensions in Bipolar ABA, as well as the preservation of acceptability, acyclicity and coherence for individual assumptions. Finally, we illustrate our methodology and results in the context of a case study on opinion aggregation for the treatment of long COVID patients.

  • Conference paper
    Freedman G, Toni F, 2025,

    Exploring the potential for large language models to demonstrate rational probabilistic beliefs

    , 38th International FLAIRS Conference, Publisher: LibraryPress@UF, ISSN: 2334-0762

    Advances in the general capabilities of large language models (LLMs) have led to their use for information retrieval, and as components in automated decision systems. A faithful representation of probabilistic reasoning in these models may be essential to ensure trustworthy, explainable and effective performance in these tasks. Despite previous work suggesting that LLMs can perform complex reasoning and well-calibrated uncertainty quantification, we find that current versions of this class of model lack the ability to provide rational and coherent representations of probabilistic beliefs. To demonstrate this, we introduce a novel dataset of claims with indeterminate truth values and apply a number of well-established techniques for uncertainty quantification to measure the ability of LLMs to adhere to fundamental properties of probabilistic reasoning.

  • Conference paper
    Alfano G, Gould A, Leofante F, Rago A, Toni F et al., 2025,

    Counterfactual explanations under model multiplicity and their use in computational argumentation

    , International Joint Conference on Artificial Intelligence (IJCAI) 2025, Publisher: IJCAI

    Counterfactual explanations (CXs) are widely recognised as an essential technique for providing recourse recommendations for AI models. However, it is not obvious how to determine CXs in model multiplicity scenarios, where equally performing but different models can be obtained for the same task. In this paper, we propose novel qualitative and quantitative definitions of CXs based on explicit, nested quantification over (groups of) model decisions. We also study properties of these notions and identify decision problems of interest therefor. While our CXs are broadly applicable, in this paper we instantiate them within computational argumentation, where model multiplicity naturally emerges, e.g. with incomplete and case-based argumentation frameworks. We then illustrate the suitability of our CXs for model multiplicity in legal and healthcare contexts, before analysing the complexity of the associated decision problems.

  • Conference paper
    Ayoobi H, Potyka N, Toni F, 2025,

    ProtoArgNet: Interpretable Image Classification with Super-Prototypes and Argumentation

    , AAAI Conference on Artificial Intelligence
  • Conference paper
    Freedman G, Dejl A, Gorur D, Yin X, Rago A, Toni F et al., 2025,

    Argumentative large language models for explainable and contestable claim verification

    , AAAI Conference on Artificial Intelligence, Publisher: Association for the Advancement of Artificial Intelligence, Pages: 14930-14939, ISSN: 2159-5399

    The profusion of knowledge encoded in large language models (LLMs) and their ability to apply this knowledge zero-shot in a range of settings make them promising candidates for use in decision-making. However, they are currently limited by their inability to provide outputs which can be faithfully explained and effectively contested to correct mistakes. In this paper, we attempt to reconcile these strengths and weaknesses by introducing argumentative LLMs (ArgLLMs), a method for augmenting LLMs with argumentative reasoning. Concretely, ArgLLMs construct argumentation frameworks, which then serve as the basis for formal reasoning in support of decision-making. The interpretable nature of these argumentation frameworks and formal reasoning means that any decision made by ArgLLMs may be explained and contested. We evaluate ArgLLMs’ performance experimentally in comparison with state-of-the-art techniques, in the context of the decision-making task of claim verification. We also define novel properties to characterise contestability and assess ArgLLMs formally in terms of these properties.

  • Conference paper
    Chen L, Dejl A, Toni F, 2025,

    Identifying Query-Relevant Neurons in Large Language Models for Long-Form Texts

    , The 39th Annual AAAI Conference on Artificial Intelligence
  • Conference paper
    Russo F, Toni F, 2025,

    Shapley-PC: constraint-based causal structure learning with a Shapley inspired framework

    , 4th Conference on Causal Learning and Reasoning (CLeaR 2025)

    Causal Structure Learning (CSL), also referred to as causal discovery, amounts to extracting causal relations among variables in data. CSL enables the estimation of causal effects from observational data alone, avoiding the need to perform real-life experiments. Constraint-based CSL leverages conditional independence tests to perform causal discovery. We propose Shapley-PC, a novel method to improve constraint-based CSL algorithms by using Shapley values over the possible conditioning sets, to decide which variables are responsible for the observed conditional (in)dependences. We prove soundness, completeness and asymptotic consistency of Shapley-PC and run a simulation study showing that our proposed algorithm is superior to existing versions of PC.
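    The Shapley attribution that the abstract above builds on can be illustrated in isolation. The sketch below is the textbook Shapley value computation over a toy set function — it is not the paper's Shapley-PC algorithm, and the example value function is purely hypothetical:

    ```python
    from itertools import combinations
    from math import factorial

    def shapley_values(players, v):
        """Exact Shapley values for a set function v over a small player set.

        phi_i = sum over coalitions S not containing i of
                |S|! * (n - |S| - 1)! / n! * (v(S + {i}) - v(S))
        """
        n = len(players)
        phi = {}
        for i in players:
            others = [p for p in players if p != i]
            total = 0.0
            for k in range(n):  # coalition sizes 0 .. n-1
                for S in combinations(others, k):
                    weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                    total += weight * (v(set(S) | {i}) - v(set(S)))
            phi[i] = total
        return phi

    # Hypothetical value function: a coalition is "worth" 1 iff it contains player 1,
    # so all the credit should flow to player 1.
    print(shapley_values([0, 1, 2], lambda S: 1.0 if 1 in S else 0.0))
    ```

    The exact computation enumerates all 2^(n-1) coalitions per player, which is only feasible for small sets — one reason attribution-based methods in this space care about which (conditioning) sets they evaluate.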

  • Conference paper
    Kori A, Glocker B, Toni F, 2025,

    Explaining Image Classifiers with Visual Debates

    , 27th International Conference on Discovery Science, Publisher: SPRINGER INTERNATIONAL PUBLISHING AG, Pages: 200-214, ISSN: 2945-9133
  • Conference paper
    De Angelis E, Proietti M, Toni F, 2025,

    Greedy ABA Learning for Case-Based Reasoning

    , 24th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Publisher: ASSOC COMPUTING MACHINERY, Pages: 556-564
  • Conference paper
    Kori A, Rago A, Toni F, 2025,

    Free argumentative exchanges for explaining image classifiers

    , AAMAS 2025, Publisher: ACM

    Deep learning models are powerful image classifiers but their opacity hinders their trustworthiness. Explanation methods for capturing the reasoning process within these classifiers faithfully and in a cognitively manageable manner are scarce, due to their sheer complexity and size. In this paper, we provide a solution for this problem by defining a novel method for explaining the outputs of image classifiers with debates between two agents, each arguing for a particular class. We obtain these debates as concrete instances of Free Argumentative eXchanges (FAXs), a novel argumentation-based multi-agent framework allowing agents to internalise opinions by other agents differently than originally stated. We define two metrics to assess the usefulness of FAXs as argumentative explanations for image classifiers. We then conduct a number of empirical experiments showing that FAXs perform well along these metrics as well as being more faithful to the image classifiers than conventional, non-argumentative explanation methods.

  • Conference paper
    Gorur D, Rago A, Toni F, 2025,

    Can Large Language Models perform Relation-based Argument Mining?

    , The 31st International Conference on Computational Linguistics
  • Conference paper
    Rapberger A, Ulbricht M, Toni F, 2024,

    On the correspondence of non-flat assumption-based argumentation and logic programming with negation as failure in the head

    , 22nd International Workshop on Nonmonotonic Reasoning (NMR 24), Publisher: CEUR Workshop Proceedings, Pages: 122-121, ISSN: 1613-0073

    The relation between (a fragment of) assumption-based argumentation (ABA) and logic programs (LPs) under stable model semantics is well-studied. However, for obtaining this relation, the ABA framework needs to be restricted to being flat, i.e., a fragment where the (defeasible) assumptions can never be entailed, only assumed to be true or false. Here, we remove this restriction and show a correspondence between non-flat ABA and LPs with negation as failure in their head. We then extend this result to so-called set-stable ABA semantics, originally defined for the fragment of non-flat ABA called bipolar ABA. We showcase how to define set-stable semantics for LPs with negation as failure in their head and show the correspondence to set-stable ABA semantics.

  • Conference paper
    Vasileiou S, Kumar A, Yeoh W, Son TC, Toni F et al., 2024,

    Dialectical reconciliation via structured argumentative dialogues

    , KR 2024

    We present a novel framework designed to extend model reconciliation approaches, commonly used in human-aware planning, for enhanced human-AI interaction. By adopting a structured argumentation-based dialogue paradigm, our framework enables dialectical reconciliation to address knowledge discrepancies between an explainer (AI agent) and an explainee (human user), where the goal is for the explainee to understand the explainer's decision. We formally describe the operational semantics of our proposed framework, providing theoretical guarantees. We then evaluate the framework's efficacy "in the wild" via computational and human-subject experiments. Our findings suggest that our framework offers a promising direction for fostering effective human-AI interactions in domains where explainability is important.

  • Conference paper
    Battaglia E, Baroni P, Rago A, Toni F et al., 2024,

    Integrating user preferences into gradual bipolar argumentation for personalised decision support

    , Scalable Uncertainty Management, 16th International Conference (SUM 2024), Publisher: Springer, Pages: 14-28, ISSN: 1611-3349

    Gradual bipolar argumentation has been shown to be an effective means for supporting decisions across a number of domains. Individual user preferences can be integrated into the domain knowledge represented by such argumentation frameworks and should be taken into account in order to provide personalised decision support. This however requires the definition of a suitable method to handle user-provided preferences in gradual bipolar argumentation, which has not been considered in previous literature. Towards filling this gap, we develop a conceptual analysis on the role of preferences in argumentation and investigate some basic principles concerning the effects they should have on the evaluation of strength in gradual argumentation semantics. We illustrate an application of our approach in the context of a review aggregation system, which has been enhanced with the ability to produce personalised outcomes based on user preferences.

  • Conference paper
    Rago A, Vasileiou SL, Toni F, Son TC, Yeoh W et al., 2024,

    A Methodology for Gradual Semantics for Structured Argumentation under Incomplete Information

    , ArXiv
  • Journal article
    Kampik T, Potyka N, Yin X, Čyras K, Toni F et al., 2024,

    Contribution functions for quantitative bipolar argumentation graphs: a principle-based analysis

    , International Journal of Approximate Reasoning, Vol: 173, ISSN: 0888-613X

    We present a principle-based analysis of contribution functions for quantitative bipolar argumentation graphs that quantify the contribution of one argument to another. The introduced principles formalise the intuitions underlying different contribution functions as well as expectations one would have regarding the behaviour of contribution functions in general. As none of the covered contribution functions satisfies all principles, our analysis can serve as a tool that enables the selection of the most suitable function based on the requirements of a given use case.

  • Conference paper
    Rapberger A, Toni F, 2024,

    On the robustness of argumentative explanations

    , 10th International Conference on Computational Models of Argument (COMMA 2024), Publisher: IOS Press, Inc., Pages: 217-228

    The field of explainable AI has grown exponentially in recent years. Within this landscape, argumentation frameworks have been shown to be helpful abstractions of some AI models towards providing explanations thereof. While existing work on argumentative explanations and their properties has focused on static settings, we focus on dynamic settings whereby the (AI models underpinning the) argumentation frameworks need to change. Specifically, for a number of notions of explanations drawn from abstract argumentation frameworks under extension-based semantics, we address the following questions: (1) Are explanations robust to extension-preserving changes, in the sense that they are still valid when the changes do not modify the extensions? (2) If not, are these explanations pseudo-robust in that they can be tractably updated? In this paper, we frame these questions formally. We consider robustness and pseudo-robustness w.r.t. ordinary and strong equivalence and provide several results for various extension-based semantics.

  • Conference paper
    Lehtonen T, Rapberger A, Toni F, Ulbricht M, Wallner JP et al., 2024,

    On computing admissibility in ABA

    , 10th International Conference on Computational Models of Argument (COMMA 2024), Publisher: IOS Press, Inc., Pages: 121-132

    Most existing computational tools for assumption-based argumentation (ABA) focus on so-called flat frameworks, disregarding the more general case. Here, we study an instantiation-based approach for reasoning in possibly non-flat ABA. For complete-based semantics, an approach of this kind was recently introduced, based on a semantics-preserving translation between ABA and bipolar argumentation frameworks (BAFs). Admissible semantics, however, require us to consider an extension of BAFs which also makes use of premises of arguments (pBAFs). We explore basic properties of pBAFs which we require as a theoretical underpinning for our proposed instantiation-based solver for non-flat ABA under admissible semantics. As our empirical evaluation shows, depending on the ABA instances, the instantiation-based solver is competitive against an ASP-based approach implemented in the style of state-of-the-art solvers for hard argumentation problems.

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
