Understanding the effect of sparsity on neural networks' robustness

Lukas Timpl*, Rahim Entezari, Hanie Sedghi, Behnam Neyshabur, Olga Saukh

*Corresponding author for this work

Research output: Contribution to conference › Poster › peer-review

Abstract

This paper examines the impact of static sparsity on the robustness of a trained network to weight perturbations, data corruption, and adversarial examples. We show that, up to a certain sparsity achieved by increasing network width and depth while keeping the network capacity fixed, sparsified networks consistently match and often outperform their initially dense versions. For very high sparsity, robustness and accuracy decline simultaneously due to loose connectivity between network layers. Our findings show that the rapid robustness drop caused by network compression observed in the literature is due to reduced network capacity rather than sparsity.
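The following is an illustrative sketch, not the authors' code: it shows the general setup the abstract describes, i.e. imposing static (magnitude-based) sparsity on a network and then measuring accuracy under Gaussian weight perturbations. The toy MLP, random data, sparsity level, and noise scale are all placeholder assumptions.

```python
# Illustrative sketch (assumed setup, not the paper's implementation):
# static magnitude sparsity on a small MLP, then a robustness check
# under Gaussian weight perturbations.
import torch
import torch.nn as nn

torch.manual_seed(0)

def magnitude_sparsify(model: nn.Module, sparsity: float) -> dict:
    """Zero out the smallest-magnitude weights globally; return the masks."""
    weights = torch.cat([p.detach().abs().flatten()
                         for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(weights, sparsity)
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:  # prune weight matrices, keep biases dense
            masks[name] = (p.detach().abs() > threshold).float()
            p.data.mul_(masks[name])
    return masks

def accuracy_under_noise(model, x, y, sigma=0.0, masks=None):
    """Accuracy after adding Gaussian noise to the (sparse) weights."""
    saved = {n: p.detach().clone() for n, p in model.named_parameters()}
    with torch.no_grad():
        for n, p in model.named_parameters():
            if p.dim() > 1:
                noise = sigma * torch.randn_like(p)
                if masks is not None:        # keep pruned weights at zero
                    noise = noise * masks[n]
                p.add_(noise)
        acc = (model(x).argmax(dim=1) == y).float().mean().item()
        for n, p in model.named_parameters():  # restore original weights
            p.copy_(saved[n])
    return acc

# Toy stand-in for a trained network and dataset (random data, assumed sizes).
model = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10))
x, y = torch.randn(512, 32), torch.randint(0, 10, (512,))

masks = magnitude_sparsify(model, sparsity=0.8)          # 80% static sparsity
clean = accuracy_under_noise(model, x, y, sigma=0.0, masks=masks)
noisy = accuracy_under_noise(model, x, y, sigma=0.05, masks=masks)
print(f"accuracy: clean={clean:.3f}, perturbed={noisy:.3f}")
```

In the paper's actual experiments, trained networks of varying width and depth are compared at matched capacity; the sketch above only illustrates the kind of perturbation measurement involved.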
Original language: English
Publication status: Published - 8 Jul 2021
Event: Sparsity in Neural Networks - Advancing Understanding and Practice (SNN Workshop 2021), Virtual
Duration: 8 Jul 2021 - 9 Jul 2021
https://sites.google.com/view/sparsity-workshop-2021/home?authuser=0

Workshop

Workshop: Sparsity in Neural Networks - Advancing Understanding and Practice
City: Virtual
Period: 8/07/21 - 9/07/21
Internet address: https://sites.google.com/view/sparsity-workshop-2021/home?authuser=0
