In this paper we show that the performance of binary classifiers based on Boosted Random Ferns can be significantly improved by appropriately bootstrapping the training step. This results in a classifier which is both highly discriminant and computationally efficient, and which is particularly suitable when only small sets of training images are available. During the learning process, a small set of labeled images is used to train the boosted binary classifier. The classifier is then evaluated over the training set, and warped versions of the correctly classified and misclassified patches are progressively added to the positive and negative sample sets for a new retraining step. In this paper we thoroughly study the conditions under which this bootstrapping scheme improves the detection rates. In particular, we assess the quality of detection as a function of both the number of bootstrapping iterations and the size of the training set. We compare our algorithm against state-of-the-art approaches on several databases, including faces, cars, motorbikes and horses, and show remarkable improvements in detection rates with just a few bootstrapping steps.
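The bootstrapping loop described above can be sketched in a few lines. This is only an illustrative toy, not the paper's method: the `train` and `warp` functions below are hypothetical stand-ins (a threshold on patch means and random jitter, respectively) for the actual Boosted Random Ferns classifier and image warping; only the structure of the retraining loop follows the abstract.

```python
import random

def train(pos, neg):
    """Toy stand-in for training Boosted Random Ferns: pick the
    threshold on patch means that separates the two sample sets."""
    mp = sum(sum(p) / len(p) for p in pos) / len(pos)
    mn = sum(sum(p) / len(p) for p in neg) / len(neg)
    thr = (mp + mn) / 2.0
    return lambda patch: sum(patch) / len(patch) > thr

def warp(patch, rng, scale=0.05):
    """Toy stand-in for warping a training patch: add small jitter."""
    return [v + rng.uniform(-scale, scale) for v in patch]

def bootstrap_train(pos, neg, iterations=3, seed=0):
    """Train, then repeatedly re-evaluate on the training set and grow
    the sample sets with warped versions of the classified positives
    and of the negatives misclassified as positives, then retrain."""
    rng = random.Random(seed)
    pos, neg = list(pos), list(neg)
    clf = train(pos, neg)
    for _ in range(iterations):
        pos += [warp(p, rng) for p in pos if clf(p)]       # true positives
        neg += [warp(n, rng) for n in neg if clf(n)]       # hard negatives
        clf = train(pos, neg)
    return clf
```

The key design point, per the abstract, is that the new samples come from the classifier's own responses on the training set, so each retraining round focuses on warped versions of the patches the current classifier handles (or mishandles).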


Author keywords

object detection, boosting, bootstrapping, random ferns

Scientific reference

M. Villamizar, J. Andrade-Cetto, A. Sanfeliu and F. Moreno-Noguer. Bootstrapping Boosted Random Ferns for discriminative and efficient object classification. Pattern Recognition, 45(9): 3141-3153, 2012.