KEYWORDS: Machine learning, Artificial intelligence, Systems modeling, Evolutionary algorithms, Algorithm development, Library classification systems, Data modeling, Random forests, Performance modeling, Education and training
Artificial intelligence (AI) and machine learning (ML) systems are required to be fair and trustworthy. They must be capable of bias detection and mitigation to achieve robustness. To this end, research on making AI/ML systems more trustworthy has grown across a plethora of fields. Causal learning and explainable AI (XAI) are two such fields that have been used extensively in recent years to achieve explainability and fairness. However, they have been used as separate methodologies, not together. This paper provides a new perspective on using causal learning and XAI together to create a more robust and trustworthy system. Having causality and explainability together in the same model adds a layer of robustness that neither achieves individually. We present a use case that combines causality via causal discovery with explainability via feature relevance. The causal graphs generated by causal discovery are compared to the feature relevance plots from the ML model. Directed causal graphs display the features that are causally relevant for the predictions, and these causally relevant features can be compared directly to the features identified by correlation-based explanations from XAI.
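A minimal sketch of the comparison the abstract describes, under stated assumptions: the causal parent set below stands in for the output of a causal discovery step (it is hypothetical, not produced by the code), the data-generating process is synthetic, and feature relevance is taken from random forest impurity importances rather than any specific XAI tool.

```python
# Sketch: compare causally relevant features (assumed causal graph) against
# correlation-based feature relevance from a random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
x1 = rng.normal(size=n)                    # true cause of y
x2 = rng.normal(size=n)                    # true cause of y
x3 = 0.5 * x1 + rng.normal(size=n)         # correlated with y only through x1
y = 2.0 * x1 + 1.0 * x2 + 0.1 * rng.normal(size=n)

X = np.column_stack([x1, x2, x3])
feature_names = ["x1", "x2", "x3"]

# Hypothetical result of a causal discovery step (parents of y in the graph);
# in practice this would come from a discovery algorithm such as PC or GES.
causal_parents = {"x1", "x2"}

# Correlation-based relevance: impurity importances from a random forest.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
top_k = {feature_names[i] for i in np.argsort(model.feature_importances_)[-2:]}

# Agreement between the two views is the validation signal the paper proposes.
overlap = causal_parents & top_k
```

Features that the two methods agree on (here, `overlap`) are the ones supported by both causal and correlational evidence; disagreement flags features worth auditing.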
A major concern for artificial reasoning systems seeking robustness and trustworthiness is causal learning, where better explanations are needed to support the underlying tasks. Explaining observational datasets without ground truth presents a unique challenge. This paper aims to provide a new perspective on explainability and causality by combining the two. We propose a model that extracts quantitative knowledge from observational data via treatment effect estimation, creating better explanations by comparing and validating the causal features against results from correlation-based feature relevance explanations. The average treatment effect (ATE) is estimated to provide a quantitative comparison of the causal features to the relevant features from explainable AI (XAI). This yields a comprehensive framework for generating robust and trustworthy explanations, validated by both causality and XAI, to ensure trustworthiness, fairness, and bias detection within the data as well as the AI/ML models for artificial reasoning systems.
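A small sketch of ATE estimation on synthetic observational data, assuming regression adjustment as the estimator (the paper does not specify one); the data-generating process and the true effect of 2.0 are illustrative assumptions.

```python
# Sketch: average treatment effect (ATE) estimation via regression adjustment.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 5000
confounder = rng.normal(size=n)
# Observational setting: treatment assignment depends on the confounder.
propensity = 1.0 / (1.0 + np.exp(-confounder))
treatment = (rng.uniform(size=n) < propensity).astype(float)
# Outcome: the true ATE is 2.0; the confounder also affects the outcome.
outcome = 2.0 * treatment + 1.5 * confounder + 0.5 * rng.normal(size=n)

# A naive difference in means is biased upward by the confounding.
naive_ate = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()

# Regression adjustment: include the confounder as a covariate, so the
# treatment coefficient recovers the ATE under these assumptions.
X = np.column_stack([treatment, confounder])
adjusted_ate = LinearRegression().fit(X, outcome).coef_[0]
```

The gap between `naive_ate` and `adjusted_ate` is exactly the quantitative signal the abstract refers to: the adjusted estimate reflects causal effect, while the naive one mixes in correlation through the confounder.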
KEYWORDS: Data fusion, Data modeling, Sensors, Computer security, Image encryption, Defense and security, Systems modeling, Analytical research, Head, Fuzzy logic
Advancements in computer science, especially artificial intelligence (AI) and machine learning (ML), have brought about a scientific revolution in a plethora of military and commercial applications. One such area has been data science, where the sheer astronomical amount of available data has spurred sub-fields of research into its storage, analysis, and use. One focus in recent years has been the fusion of data coming from multiple modalities, called multi-modal data fusion, and its use and analysis in practical, deployable applications. Because of the differences among the data types, ranging from infrared/radio-frequency to audio/visual, it is extremely difficult, if not outright impossible, to analyze them with a single method. The need to fuse multiple data types and sources properly and adequately for analysis therefore introduces an extra degree of freedom for data science. This paper provides a survey of multi-modal data fusion. We give an in-depth review of multi-modal data fusion themes and describe methods for designing and developing such data fusion techniques, including an overview of the different methods and levels of data fusion. An overview of the security of data-fusion techniques is also provided, highlighting present gaps within the field that need to be addressed.
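The "levels" of fusion the survey covers can be illustrated with two of the most common ones; this is a toy sketch, with two synthetic "modalities" standing in for real sensor streams and logistic regression as an arbitrary classifier choice.

```python
# Sketch: feature-level ("early") vs. decision-level ("late") fusion
# of two synthetic modalities observing the same label.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
labels = rng.integers(0, 2, size=n)
# Each modality is a noisy, differently-sized view of the same label.
modality_a = labels[:, None] + rng.normal(scale=1.0, size=(n, 4))
modality_b = labels[:, None] + rng.normal(scale=1.0, size=(n, 6))

# Early fusion: concatenate raw features, train a single model.
fused = np.hstack([modality_a, modality_b])
early = LogisticRegression(max_iter=1000).fit(fused, labels)
early_acc = (early.predict(fused) == labels).mean()

# Late fusion: one model per modality, combine at the decision level
# (here by averaging predicted probabilities).
clf_a = LogisticRegression(max_iter=1000).fit(modality_a, labels)
clf_b = LogisticRegression(max_iter=1000).fit(modality_b, labels)
late_scores = (clf_a.predict_proba(modality_a)[:, 1]
               + clf_b.predict_proba(modality_b)[:, 1]) / 2
late_acc = ((late_scores > 0.5).astype(int) == labels).mean()
```

Early fusion lets the model exploit cross-modal interactions but requires aligned, comparable features; late fusion tolerates heterogeneous modalities (e.g., radio-frequency vs. visual) at the cost of losing those interactions, which is why the choice of fusion level matters in the survey's framing.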
The recent advances in machine learning (ML) and artificial intelligence (AI) have resulted in widespread application of data-driven learning algorithms. The rapid growth of AI/ML and their penetration into a plethora of civilian and military applications, while successful, has also opened new vulnerabilities. It is now clear that ML algorithms in AI systems are viable targets for malicious attacks. There is therefore a pressing need to better understand adversarial attacks against ML models in order to secure them. In this paper, we present a survey of adversarial machine learning and associated countermeasures. We also present a taxonomy of attacks on ML/AI systems that share the same properties and characteristics, allowing them to be linked with different defensive approaches. A taxonomy is given for both attacks and defenses, and attacks proposed in the literature are categorized accordingly.
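A concrete instance of the kind of evasion attack such a survey catalogs: a fast-gradient-sign-style perturbation against a linear classifier, written in plain NumPy. The data, the hand-rolled logistic regression, and the attack budget are all illustrative assumptions, not the survey's own experiment.

```python
# Sketch: gradient-sign evasion attack on a logistic-regression classifier.
import numpy as np

rng = np.random.default_rng(3)
# Two well-separated classes with means -1 and +1 in every dimension.
X = np.vstack([rng.normal(-1, 0.3, size=(100, 5)),
               rng.normal(+1, 0.3, size=(100, 5))])
y = np.array([0] * 100 + [1] * 100)

# Train logistic regression by gradient descent on the log loss.
w, b = np.zeros(5), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

# For a linear model, the loss gradient w.r.t. the input is proportional
# to w, so the sign of w gives the attack direction.
x = X[0]                           # a correctly classified class-0 point
epsilon = 2.0                      # attack budget (assumed for illustration)
x_adv = x + epsilon * np.sign(w)   # push the input toward class 1

clean_pred = int((x @ w + b) > 0)  # prediction on the clean input
adv_pred = int((x_adv @ w + b) > 0)  # prediction after perturbation
```

A bounded perturbation flips the model's decision even though the model fits the clean data well, which is the vulnerability that motivates the attack/defense taxonomy.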