On-the-fly network pruning for object detection

Marc Masana Castrillo*, Joost van de Weijer, Andrew D. Bagdanov

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

Object detection with deep neural networks is often performed by passing a few thousand candidate bounding boxes per image through the network. These bounding boxes are highly correlated since they originate from the same image. In this paper we investigate how to exploit feature occurrence at the image scale to prune the neural network which is subsequently applied to all bounding boxes. We show that removing units which have near-zero activation in the image allows us to significantly reduce the number of parameters in the network. Results on the PASCAL 2007 Object Detection Challenge demonstrate that up to 40% of units in some fully-connected layers can be entirely eliminated with little change in the detection result.
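As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below drops output units of a fully-connected layer that are near zero across all candidate boxes of the current image, then evaluates only the reduced layer on every box. The function name, the threshold value, and the NumPy-based setup are assumptions for illustration only.

```python
import numpy as np

def prune_fc_on_the_fly(W, b, activations, threshold=1e-3):
    """Prune output units of a fully-connected layer whose activations are
    near zero for every candidate box of the current image.

    W: (out_units, in_units) weight matrix, b: (out_units,) bias,
    activations: (num_boxes, out_units) unit activations for this image.
    Returns the pruned weights/bias and the indices of the kept units.
    (Illustrative sketch only; the threshold is an assumed hyper-parameter.)
    """
    # Keep a unit if at least one box activates it above the threshold.
    keep = np.abs(activations).max(axis=0) > threshold
    return W[keep], b[keep], np.flatnonzero(keep)

# Example with random data standing in for a fully-connected detection layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((4096, 9216))
b = rng.standard_normal(4096)
acts = np.maximum(rng.standard_normal((2000, 4096)) - 1.0, 0.0)  # ReLU-like

W_pruned, b_pruned, kept = prune_fc_on_the_fly(W, b, acts)
print(f"kept {len(kept)} of {W.shape[0]} units")
```

Because the pruning decision is made per image, the smaller layer only needs to be formed once and is then reused for all of that image's bounding boxes, which is where the savings come from.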
Original language: English
Title of host publication: International Conference on Learning Representations (ICLR)
Publication status: Published - 2016
Externally published: Yes

Publication series

Name: arXiv preprint arXiv:1605.03477
