Paper
Robust, fair, and trustworthy artificial reasoning systems via quantitative causal learning and explainability
12 June 2023
Atul Rawal, Adrienne Raglin, Brian M. Sadler, and Danda B. Rawat
Abstract
A major challenge for artificial reasoning systems seeking robustness and trustworthiness is causal learning, where better explanations are needed to support the underlying tasks. Explaining observational datasets without ground truth presents a unique challenge. This paper aims to provide a new perspective on explainability and causality by combining the two. We propose a model that extracts quantitative knowledge from observational data via treatment effect estimation, creating better explanations by comparing and validating the causal features against the results of correlation-based feature relevance explanations. Average treatment effect (ATE) estimation provides a quantitative comparison between the causal features and the relevant features identified by explainable AI (XAI). This yields a comprehensive approach to generating robust and trustworthy explanations, validated by both causality and XAI, to support trustworthiness, fairness, and bias detection within the data as well as the AI/ML models underlying artificial reasoning systems.
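The abstract's core idea pairs an ATE estimate with a correlation-based relevance score for the same feature. The sketch below is a minimal illustration of that pairing, not the authors' implementation: it uses synthetic data, binarizes one feature at its median to play the role of a treatment (an assumption), estimates the ATE by simple regression adjustment, and compares it with permutation importance from a random forest as the XAI-side relevance measure.

```python
# Minimal sketch of comparing a causal ATE estimate with a
# correlation-based feature relevance score. Dataset, model choices,
# and the median binarization are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression

# Synthetic observational data: X holds features, y is the outcome.
X, y = make_regression(n_samples=1000, n_features=5, noise=10.0, random_state=0)

# Treat feature 0 as the "treatment", binarized at its median (assumption).
t = (X[:, 0] > np.median(X[:, 0])).astype(int)
covariates = X[:, 1:]

# --- Causal side: ATE via regression adjustment (S-learner style) ---
# Fit one model on (treatment, covariates), then contrast predictions
# with the treatment toggled on vs. off for every unit.
model = LinearRegression().fit(np.column_stack([t, covariates]), y)
y1 = model.predict(np.column_stack([np.ones_like(t), covariates]))
y0 = model.predict(np.column_stack([np.zeros_like(t), covariates]))
ate = np.mean(y1 - y0)

# --- XAI side: correlation-based relevance via permutation importance ---
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
relevance_f0 = imp.importances_mean[0]

print(f"Estimated ATE of feature 0:          {ate:.3f}")
print(f"Permutation importance of feature 0: {relevance_f0:.3f}")
```

In the spirit of the paper, agreement between a large ATE and a high relevance score lends support to the feature, while divergence flags a potentially spurious correlation or bias worth inspecting.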
© (2023) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Atul Rawal, Adrienne Raglin, Brian M. Sadler, and Danda B. Rawat "Robust, fair, and trustworthy artificial reasoning systems via quantitative causal learning and explainability", Proc. SPIE 12538, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications V, 125380B (12 June 2023); https://doi.org/10.1117/12.2666086
KEYWORDS
Machine learning
Artificial intelligence