Adversarial Attacks and Defenses in Explainable AI

A curated list of papers concerning adversarial explainable AI (AdvXAI).

Survey

February 2024: The survey is now published in *Information Fusion*: https://doi.org/10.1016/j.inffus.2024.102303

September 2023: An extended version of the paper is available on arXiv: https://arxiv.org/abs/2306.06123

June 2023: We summarized the current state of the AdvXAI field in the following survey paper (work in progress):

H. Baniecki, P. Biecek. Adversarial Attacks and Defenses in Explainable Artificial Intelligence: A Survey. IJCAI Workshop on XAI, 2023.

Abstract

<p align="center"> <a href="https://doi.org/10.1016/j.inffus.2024.102303"> <img src="fig/abstract.png"> </a> </p>

Explainable artificial intelligence (XAI) methods are portrayed as a remedy for debugging and trusting statistical and deep learning models, as well as interpreting their predictions. However, recent advances in adversarial machine learning (AdvML) highlight the limitations and vulnerabilities of state-of-the-art explanation methods, putting their security and trustworthiness into question. The possibility of manipulating, fooling or fairwashing evidence of the model's reasoning has detrimental consequences when applied in high-stakes decision-making and knowledge discovery. This survey provides a comprehensive overview of research concerning adversarial attacks on explanations of machine learning models, as well as fairness metrics. We introduce a unified notation and taxonomy of methods facilitating a common ground for researchers and practitioners from the intersecting research fields of AdvML and XAI. We discuss how to defend against attacks and design robust interpretation methods. We contribute a list of existing insecurities in XAI and outline the emerging research directions in adversarial XAI (AdvXAI). Future work should address improving explanation methods and evaluation protocols to take into account the reported safety issues.
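To make the attack surface concrete, below is a minimal, hypothetical sketch (not code from the survey) in the spirit of gradient-explanation manipulation: a small input perturbation is optimized so that a saliency map moves toward an attacker-chosen target while the model's prediction stays nearly unchanged. The toy model, target explanation, and loss weights are illustrative assumptions. Note the smooth Softplus activation: in a piecewise-linear ReLU network the saliency map is locally constant, so such an attack would have no gradient to follow.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for any differentiable classifier. Softplus (rather than ReLU)
# keeps the gradient explanation itself differentiable with respect to the input.
model = nn.Sequential(nn.Linear(10, 32), nn.Softplus(), nn.Linear(32, 2))
model.eval()

x = torch.randn(1, 10)
c = model(x).argmax(dim=1).item()  # class whose explanation is manipulated

def saliency(inp: torch.Tensor) -> torch.Tensor:
    """Gradient explanation: d(logit of class c) / d(input)."""
    if not inp.requires_grad:
        inp = inp.detach().clone().requires_grad_(True)
    logits = model(inp)
    (grad,) = torch.autograd.grad(logits[0, c], inp, create_graph=True)
    return grad

orig_logits = model(x).detach()
# An arbitrary "wrong" explanation the attacker wants to induce.
target = torch.roll(saliency(x).detach(), shifts=5, dims=1)

delta = torch.zeros_like(x, requires_grad=True)
opt = torch.optim.Adam([delta], lr=5e-3)

for _ in range(300):
    opt.zero_grad()
    x_adv = x + delta
    expl_loss = (saliency(x_adv) - target).pow(2).sum()    # move the explanation
    pred_loss = (model(x_adv) - orig_logits).pow(2).sum()  # keep the prediction
    (expl_loss + 10.0 * pred_loss).backward()
    opt.step()

x_adv = (x + delta).detach()
print("prediction unchanged:", model(x_adv).argmax(dim=1).item() == c)
print("explanation moved toward target:",
      (saliency(x_adv) - target).pow(2).sum().item()
      < (saliency(x) - target).pow(2).sum().item())
```

The two loss terms capture the core tension studied in this literature: the attacker trades off how far the explanation moves against how visibly the model's output changes.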

Citation

@article{baniecki2024adversarial,
  author  = {Hubert Baniecki and Przemyslaw Biecek},
  title   = {Adversarial attacks and defenses in 
             explainable artificial intelligence: A survey},
  journal = {Information Fusion},
  volume  = {107},
  pages   = {102303},
  year    = {2024}
}

Related surveys

Background (2018)

Adversarial attacks on model explanations

Defenses against attacks on explanations

Towards more robust and stable explanations

Adversarial attacks on fairness metrics

Related evaluations of explanations

Further related papers

<p align="center"> <a href="https://arxiv.org/abs/2306.06123"> <img src="fig/preview.png"> </a> </p>

<p align="center"> <em>Veritas Vincit</em> </p>