Understanding the effect of sparsity on neural networks' robustness

Lukas Timpl*, Rahim Entezari, Hanie Sedghi, Behnam Neyshabur, Olga Saukh

*Corresponding author for this work

Publication: Conference contribution › Poster › peer-reviewed

Abstract

This paper examines the impact of static sparsity on the robustness of a trained network to weight perturbations, data corruption, and adversarial examples. We show that, up to a certain sparsity achieved by increasing network width and depth while keeping the network capacity fixed, sparsified networks consistently match and often outperform their initially dense versions. Robustness and accuracy decline simultaneously for very high sparsity due to loose connectivity between network layers. Our findings show that the rapid robustness drop caused by network compression observed in the literature is due to reduced network capacity rather than sparsity.
Original language: English
Publication status: Published - 8 Jul 2021
Event: Sparsity in Neural Networks - Advancing Understanding and Practice: SNN Workshop 2021 - Virtual
Duration: 8 Jul 2021 - 9 Jul 2021
https://sites.google.com/view/sparsity-workshop-2021/home?authuser=0

Workshop

Workshop: Sparsity in Neural Networks - Advancing Understanding and Practice
Location: Virtual
Period: 8/07/21 - 9/07/21
Web address
