Publication

Anytime inference with distilled hierarchical neural ensembles

Conference Article

Conference

AAAI Conference on Artificial Intelligence (AAAI)

Edition

35th

Pages

9463-9471

Doc link

https://ojs.aaai.org/index.php/AAAI/article/view/17140

Authors

A. Ruiz
J. Verbeek

Abstract

Inference in deep neural networks can be computationally expensive, and networks capable of anytime inference are important in scenarios where the amount of compute or the quantity of input data varies over time. In such networks, the inference process can be interrupted to provide a result faster, or continued to obtain a more accurate result. We propose Hierarchical Neural Ensembles (HNE), a novel framework to embed an ensemble of multiple networks in a hierarchical tree structure, sharing intermediate layers. In HNE we control the complexity of inference on-the-fly by evaluating more or fewer models in the ensemble. Our second contribution is a novel hierarchical distillation method to boost the prediction accuracy of small ensembles. This approach leverages the nested structure of our ensembles to optimally allocate accuracy and diversity across the individual models. Our experiments show that, compared to previous anytime inference models, HNE provides state-of-the-art accuracy-compute trade-offs on the CIFAR-10/100 and ImageNet datasets.
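
Although the page only gives the abstract, the architecture it describes can be made concrete with a short sketch. The PyTorch snippet below shows one way a hierarchical ensemble with shared intermediate layers and a tunable number of evaluated members could be organized; the class name, layer sizes, and binary-tree layout are illustrative assumptions and do not reproduce the authors' implementation or the hierarchical distillation loss.

import torch
import torch.nn as nn

# Minimal sketch in the spirit of HNE (illustrative, not the authors' code).
# A binary tree of blocks: the root block is shared by all ensemble members,
# each split halves the set of members sharing the later blocks, and every
# leaf ends in its own classifier head.
class HierarchicalEnsemble(nn.Module):
    def __init__(self, depth=3, channels=32, num_classes=10):
        super().__init__()
        self.depth = depth
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        # One block per tree node in heap layout: node i has children 2i+1 and 2i+2.
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                          nn.BatchNorm2d(channels), nn.ReLU())
            for _ in range(2 ** (depth + 1) - 1)
        )
        # One classifier head per leaf, i.e. per ensemble member.
        self.heads = nn.ModuleList(
            nn.Linear(channels, num_classes) for _ in range(2 ** depth)
        )

    def forward(self, x, num_members=None):
        # Evaluate the first `num_members` leaves and average their logits:
        # fewer members -> cheaper but less accurate, more members -> the opposite.
        num_members = num_members or len(self.heads)
        x = self.stem(x)
        cache = {}  # node index -> activation, so shared ancestors run only once

        def node_output(i):
            if i not in cache:
                inp = x if i == 0 else node_output((i - 1) // 2)
                cache[i] = self.blocks[i](inp)
            return cache[i]

        first_leaf = 2 ** self.depth - 1
        logits = []
        for leaf in range(first_leaf, first_leaf + num_members):
            feat = node_output(leaf).mean(dim=(2, 3))  # global average pooling
            logits.append(self.heads[leaf - first_leaf](feat))
        return torch.stack(logits).mean(dim=0)

In this sketch, model(x, num_members=2) evaluates only the two leftmost members plus the blocks they share, while model(x) runs the full ensemble; adding members refines the averaged prediction, which is the anytime behaviour the abstract refers to.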

Categories

Computer Vision

Author keywords

Deep Learning, Adaptive Networks, Efficient Inference

Scientific reference

A. Ruiz and J. Verbeek. Anytime inference with distilled hierarchical neural ensembles, 35th AAAI Conference on Artificial Intelligence, 2021 (Virtual), pp. 9463-9471.