Publication

Generating attribution maps with disentangled masked backpropagation

Conference Article

Conference

International Conference on Computer Vision (ICCV)

Edition

2021

Pages

885-894

Doc link

https://doi.org/10.1109/ICCV48922.2021.00094

Abstract

Attribution map visualization has emerged as one of the most effective techniques for understanding the underlying inference process of Convolutional Neural Networks. In this task, the goal is to compute a score for each image pixel reflecting its contribution to the network output. In this paper, we introduce Disentangled Masked Backpropagation (DMBP), a novel gradient-based method that leverages the piecewise linear nature of ReLU networks to decompose the model function into different linear mappings. This decomposition aims to disentangle the attribution maps into positive, negative and nuisance factors by learning a set of variables masking the contribution of each filter during backpropagation. A thorough evaluation over standard architectures (ResNet50 and VGG16) and benchmark datasets (PASCAL VOC and ImageNet) demonstrates that DMBP generates more visually interpretable attribution maps than previous approaches. Additionally, we quantitatively show that the maps produced by our method are more consistent with the true contribution of each pixel to the final network output.
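
For illustration only, the sketch below shows how per-filter mask variables can modulate a backward pass in PyTorch, which is the mechanism the abstract refers to. The class name MaskedBackprop, the choice of hooking Conv2d outputs, and the gradient-times-input readout are assumptions made for this sketch; the masks here are fixed at 1 (plain backpropagation), whereas DMBP learns them so that the resulting maps separate positive, negative and nuisance contributions.

import torch
import torch.nn as nn
from torchvision import models

class MaskedBackprop:
    """Rescale the gradient flowing back through every convolutional filter
    by a per-filter mask, then read an attribution map off the input gradient."""

    def __init__(self, model: nn.Module):
        self.model = model.eval()
        self.masks = {}
        for name, module in model.named_modules():
            if isinstance(module, nn.Conv2d):
                # One mask variable per output filter. Initialised to 1
                # (plain backpropagation); in DMBP these values are learned.
                self.masks[name] = torch.ones(module.out_channels)
                module.register_forward_hook(self._make_hook(name))

    def _make_hook(self, name):
        def forward_hook(module, inputs, output):
            mask = self.masks[name].view(1, -1, 1, 1)
            # Tensor hook: multiplies the gradient of this activation by the
            # mask during the backward pass; the forward pass is unchanged.
            output.register_hook(lambda grad: grad * mask)
        return forward_hook

    def attribution(self, image: torch.Tensor, target_class: int) -> torch.Tensor:
        image = image.clone().requires_grad_(True)
        score = self.model(image)[0, target_class]
        score.backward()
        # Gradient-times-input readout, summed over colour channels (assumption).
        return (image.grad * image).sum(dim=1).squeeze(0).detach()

# Usage: attribution map for an arbitrary target class on a random input.
model = models.vgg16(weights=None)
explainer = MaskedBackprop(model)
heatmap = explainer.attribution(torch.randn(1, 3, 224, 224), target_class=243)
print(heatmap.shape)  # torch.Size([224, 224])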

Categories

Computer Vision

Author keywords

Interpretable AI, Attribution Map Generation

Scientific reference

A. Ruiz, A. Agudo and F. Moreno-Noguer. Generating attribution maps with disentangled masked backpropagation, 2021 International Conference on Computer Vision (ICCV), Montreal, Canada, pp. 885-894.