Parameterized Structured Pruning for Deep Neural Networks

Günther Schindler*, Wolfgang Roth, Franz Pernkopf, Holger Fröning

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

As Deep Neural Networks (DNNs) grow in size, the gap between their memory and compute requirements and the capabilities of hardware widens. Quantization and pruning are the techniques usually considered for compressing DNNs. However, unconstrained pruning typically produces unstructured sparsity, which maps poorly to massively parallel processors and substantially reduces the efficiency of general-purpose processors. The same applies to quantization, which often requires dedicated hardware. We propose Parameterized Structured Pruning (PSP), a novel technique that dynamically learns the shape of DNNs through structured sparsity. PSP parameterizes structures (e.g., channel- or layer-wise) in a weight tensor and leverages weight decay to learn a clear distinction between important and unimportant structures. As a result, PSP maintains prediction performance while creating a substantial amount of sparsity that is structured and therefore easy and efficient to map to a variety of massively parallel processors, which are mandatory for utmost compute power and energy efficiency.
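The abstract describes parameterizing structures (e.g., channels) with learnable parameters and letting weight decay separate important from unimportant structures. The paper's exact formulation is not reproduced here; the NumPy sketch below only illustrates the general idea on a toy linear layer, under the assumption that each channel is scaled by a gate parameter `alpha` (a hypothetical name) trained with L2 decay, after which channels with near-zero gates are pruned as whole units.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8 input channels, but only the first 3 carry signal.
n, c = 1024, 8
X = rng.normal(size=(n, c))
true_w = np.zeros(c)
true_w[:3] = [1.5, -2.0, 0.8]
y = X @ true_w

# "Pretrained" weights: the true weights plus small noise.
w = true_w + 0.05 * rng.normal(size=c)

# PSP-style sketch (an assumption, not the paper's exact method):
# each channel gets a gate parameter alpha that scales its weights;
# L2 weight decay is applied to alpha only, so gates of channels that
# do not help reduce the loss shrink toward zero during training.
alpha = np.ones(c)
lr, decay = 0.1, 0.05
for _ in range(2000):
    err = X @ (alpha * w) - y                      # residual of the gated layer
    grad_alpha = (X.T @ err / n) * w + decay * alpha  # data gradient + decay
    alpha -= lr * grad_alpha

# Structured pruning: drop whole channels whose gate is near zero.
keep = np.abs(alpha) > 0.3
print("kept channels:", np.flatnonzero(keep))
```

Because entire channels are removed, the remaining layer is still a smaller dense matrix rather than a sparse one, which is what makes this form of sparsity efficient on massively parallel hardware.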

Original language: English
Title of host publication: Machine Learning, Optimization, and Data Science - 6th International Conference, LOD 2020, Revised Selected Papers
Editors: Giuseppe Nicosia, Varun Ojha, Emanuele La Malfa, Giorgio Jansen, Vincenzo Sciacca, Panos Pardalos, Giovanni Giuffrida, Renato Umeton
Publisher: Springer, Cham
Pages: 16-27
Number of pages: 12
ISBN (Electronic): 978-3-030-64580-9
ISBN (Print): 978-3-030-64579-3
Publication status: Published - 1 Jan 2020
Event: 6th International Conference on Machine Learning, Optimization, and Data Science (LOD 2020) - Virtual, Siena, Italy
Duration: 19 Jul 2020 - 23 Jul 2020

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12566 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 6th International Conference on Machine Learning, Optimization, and Data Science
Abbreviated title: LOD 2020
Country/Territory: Italy
City: Virtual, Siena
Period: 19/07/20 - 23/07/20

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

