Publication

Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning

Conference Article

Conference

ACM Conference on Human Factors in Computing Systems (CHI)

Edition

2022

Pages

1-32

Doc link

https://doi.org/10.1145/3491102.3517522

File

Download the PDF of the article

Authors

W. Zhang, M. Dimiccoli and B.Y. Lim

Abstract

Model explanations such as saliency maps can improve user trust in AI by highlighting important features for a prediction. However, these become distorted and misleading when explaining predictions of images that are subject to systematic error (bias). Furthermore, the distortions persist despite model fine-tuning on images biased by different factors (blur, color temperature, day/night). We present Debiased-CAM to recover explanation faithfulness across various bias types and levels by training a multi-input, multi-task model with auxiliary tasks for explanation and bias level predictions. In simulation studies, the approach not only enhanced prediction accuracy, but also generated highly faithful explanations about these predictions as if the images were unbiased. In user studies, debiased explanations improved user task performance, perceived truthfulness and perceived helpfulness. Debiased training can provide a versatile platform for robust performance and explanation faithfulness for a wide range of applications with data biases.
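The multi-task setup described in the abstract (a shared backbone with a main classification head plus auxiliary heads for explanation and bias-level prediction) could be sketched roughly as follows. This is an illustrative assumption, not the paper's actual architecture: the layer sizes, head shapes, loss weights, and the names `MultiTaskDebiasNet` and `multitask_loss` are all hypothetical.

```python
import torch
import torch.nn as nn

class MultiTaskDebiasNet(nn.Module):
    """Hypothetical sketch: one shared CNN backbone feeding three heads
    (class prediction, unbiased-CAM regression, bias-level regression)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(32, num_classes)  # main task
        self.cam_head = nn.Conv2d(32, 1, 1)           # auxiliary: predict unbiased saliency map
        self.bias_head = nn.Linear(32, 1)             # auxiliary: predict bias level

    def forward(self, x):
        feat = self.backbone(x)              # (B, 32, H/4, W/4)
        pooled = self.pool(feat).flatten(1)  # (B, 32)
        return (self.classifier(pooled),
                self.cam_head(feat),
                self.bias_head(pooled))

def multitask_loss(outputs, targets, weights=(1.0, 0.5, 0.5)):
    """Weighted sum of the three task losses; weights are illustrative."""
    logits, cam_pred, bias_pred = outputs
    labels, cam_target, bias_target = targets
    return (weights[0] * nn.functional.cross_entropy(logits, labels)
            + weights[1] * nn.functional.mse_loss(cam_pred, cam_target)
            + weights[2] * nn.functional.mse_loss(bias_pred, bias_target))
```

In this sketch, training on biased images against unbiased target CAMs is what would push the explanation head to recover faithful saliency maps regardless of the input perturbation.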

Categories

artificial intelligence, pattern classification

Author keywords

Explainable AI, model explanations, Class activation map, Saliency, Computer vision, Bias, User studies

Scientific reference

W. Zhang, M. Dimiccoli and B.Y. Lim. Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning, 2022 ACM Conference on Human Factors in Computing Systems (CHI), New Orleans, 2022, pp. 1-32, ACM.