
dc.date.accessioned: 2022-02-19T19:32:36Z
dc.date.available: 2022-02-19T19:32:36Z
dc.date.created: 2021-11-20T07:13:00Z
dc.date.issued: 2021
dc.identifier.citation: Yeom, Seul-Ki; Seegerer, Philipp; Lapuschkin, Sebastian; Binder, Alexander; Wiedemann, Simon; Müller, Klaus-Robert; Samek, Wojciech. Pruning by explaining: A novel criterion for deep neural network pruning. Pattern Recognition. 2021, 115.
dc.identifier.uri: http://hdl.handle.net/10852/91165
dc.description.abstract: The success of convolutional neural networks (CNNs) in various applications is accompanied by a significant increase in computation and parameter storage costs. Recent efforts to reduce these overheads involve pruning and compressing the weights of various layers while at the same time aiming to not sacrifice performance. In this paper, we propose a novel criterion for CNN pruning inspired by neural network interpretability: The most relevant units, i.e. weights or filters, are automatically found using their relevance scores obtained from concepts of explainable AI (XAI). By exploring this idea, we connect the lines of interpretability and model compression research. We show that our proposed method can efficiently prune CNN models in transfer-learning setups in which networks pre-trained on large corpora are adapted to specialized tasks. The method is evaluated on a broad range of computer vision datasets. Notably, our novel criterion is not only competitive or better compared to state-of-the-art pruning criteria when successive retraining is performed, but clearly outperforms these previous criteria in the resource-constrained application scenario in which the data of the task to be transferred to is very scarce and one chooses to refrain from fine-tuning. Our method is able to compress the model iteratively while maintaining or even improving accuracy. At the same time, it has a computational cost in the order of gradient computation and is comparatively simple to apply without the need for tuning hyperparameters for pruning.
dc.language: EN
dc.rights: Attribution 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.title: Pruning by explaining: A novel criterion for deep neural network pruning
dc.type: Journal article
dc.creator.author: Yeom, Seul-Ki
dc.creator.author: Seegerer, Philipp
dc.creator.author: Lapuschkin, Sebastian
dc.creator.author: Binder, Alexander
dc.creator.author: Wiedemann, Simon
dc.creator.author: Müller, Klaus-Robert
dc.creator.author: Samek, Wojciech
cristin.unitcode: 185,15,5,47
cristin.unitname: Digital signalbehandling og bildeanalyse
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 2
dc.identifier.cristin: 1956708
dc.identifier.bibliographiccitation: info:ofi/fmt:kev:mtx:ctx&ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.jtitle=Pattern Recognition&rft.volume=115&rft.spage=&rft.date=2021
dc.identifier.jtitle: Pattern Recognition
dc.identifier.volume: 115
dc.identifier.doi: https://doi.org/10.1016/j.patcog.2021.107899
dc.identifier.urn: URN:NBN:no-93762
dc.type.document: Journal article (Tidsskriftartikkel)
dc.type.peerreviewed: Peer reviewed
dc.source.issn: 0031-3203
dc.identifier.fulltext: Fulltext https://www.duo.uio.no/bitstream/handle/10852/91165/1/1-s2.0-S0031320321000868-main.pdf
dc.type.version: PublishedVersion
cristin.articleid: 107899
dc.relation.project: NFR/309439
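
The abstract above describes ranking network units (weights or filters) by relevance scores obtained from explainable-AI methods and pruning the least relevant ones, iteratively and optionally with fine-tuning between rounds. The sketch below is a minimal, hypothetical PyTorch illustration of that idea: it scores each convolutional filter with an activation-times-gradient proxy (a stand-in for the paper's LRP relevance scores, whose propagation rules are not reproduced here) and masks the lowest-scoring filters. The toy model, hook names, and the filter_relevance/prune_lowest helpers are illustrative assumptions, not the authors' implementation.

# Minimal sketch of relevance-based filter pruning (hypothetical, not the
# authors' code). Per-filter "relevance" is approximated by an
# activation x gradient proxy; the paper uses LRP relevance scores instead.
import torch
import torch.nn as nn


def filter_relevance(activations: torch.Tensor, grads: torch.Tensor) -> torch.Tensor:
    # activations, grads: (batch, out_channels, H, W) captured via hooks.
    # Accumulate a non-negative score per output filter.
    return (activations * grads).abs().sum(dim=(0, 2, 3))


def prune_lowest(conv: nn.Conv2d, relevance: torch.Tensor, ratio: float) -> None:
    # Structured pruning by masking: zero the weights (and biases) of the
    # least relevant filters. Physically removing channels would also
    # require shrinking the following layer accordingly.
    n_prune = int(ratio * conv.out_channels)
    if n_prune == 0:
        return
    idx = torch.argsort(relevance)[:n_prune]
    with torch.no_grad():
        conv.weight[idx] = 0.0
        if conv.bias is not None:
            conv.bias[idx] = 0.0


if __name__ == "__main__":
    # Toy CNN and a random reference batch standing in for the scarce
    # target-domain data discussed in the abstract.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
    )
    conv = model[0]
    cache = {}
    conv.register_forward_hook(lambda m, i, o: cache.update(act=o.detach()))
    conv.register_full_backward_hook(lambda m, gi, go: cache.update(grad=go[0].detach()))

    x = torch.randn(8, 3, 32, 32)
    model(x).sum().backward()  # one forward/backward pass to fill the cache

    rel = filter_relevance(cache["act"], cache["grad"])
    prune_lowest(conv, rel, ratio=0.25)  # mask the 25% least relevant filters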

