Generating attribution maps with disentangled masked backpropagation

Conference Article


International Conference on Computer Vision (ICCV)






Attribution map visualization has arisen as one of the most effective techniques to understand the underlying inference process of Convolutional Neural Networks. In this task, the goal is to compute a score for each image pixel reflecting its contribution to the network output. In this paper, we introduce Disentangled Masked Backpropagation (DMBP), a novel gradient-based method that leverages the piecewise-linear nature of ReLU networks to decompose the model function into different linear mappings. This decomposition aims to disentangle the attribution maps into positive, negative, and nuisance factors by learning a set of variables that mask the contribution of each filter during backpropagation. A thorough evaluation over standard architectures (ResNet50 and VGG16) and benchmark datasets (PASCAL VOC and ImageNet) demonstrates that DMBP generates more visually interpretable attribution maps than previous approaches. Additionally, we quantitatively show that the maps produced by our method are more consistent with the true contribution of each pixel to the final network output.
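The abstract's core idea, masking each filter's contribution during backpropagation through a piecewise-linear ReLU network, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function `masked_gradient_attribution`, the two-layer bias-free network, and the per-unit masks `m` are hypothetical simplifications used only to show how masked backpropagation yields a gradient-times-input attribution map.

```python
import numpy as np

def masked_gradient_attribution(x, W1, W2, m):
    """Toy sketch of masked backpropagation (hypothetical, not DMBP itself).

    x  : input vector (d,)
    W1 : first-layer weights (h, d)
    W2 : output weights (h,)
    m  : per-hidden-unit masks in [0, 1], gating each unit's gradient
    Returns (network output, gradient-times-input attribution map).
    """
    z = W1 @ x                       # pre-activations
    a = np.maximum(z, 0.0)           # ReLU: the network is piecewise linear in x
    y = W2 @ a                       # scalar network output
    # Backward pass: the ReLU derivative is 0/1; the learned masks m
    # additionally gate how much each unit contributes to the gradient.
    grad_a = W2 * (z > 0) * m        # masked upstream gradient per hidden unit
    grad_x = W1.T @ grad_a           # gradient signal reaching the input
    return y, grad_x * x             # gradient-times-input attribution map


# With all masks set to 1, the bias-free ReLU network is positively
# homogeneous, so the attributions sum exactly to the output y.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
W1 = rng.normal(size=(6, 4))
W2 = rng.normal(size=6)
y, attr = masked_gradient_attribution(x, W1, W2, np.ones(6))
print(np.isclose(attr.sum(), y))
```

In a setting closer to the paper's, the masks would be learned variables, optimized so that the resulting maps separate into positive, negative, and nuisance components; here they are simply fixed inputs.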



Author keywords

Interpretable AI, Attribution Map Generation

Scientific reference

A. Ruiz, A. Agudo and F. Moreno-Noguer, "Generating attribution maps with disentangled masked backpropagation," 2021 International Conference on Computer Vision (ICCV), Virtual, 2021, pp. 905-914.