Neural Network Compression via Learnable Wavelet Transforms

Moritz Wolter, Shaohui Lin, and Angela Yao
In Proceedings of the International Conference on Artificial Neural Networks, Springer, 2020
 

Abstract

Wavelets are well known for data compression, yet have rarely been applied to the compression of neural networks. This paper shows how the fast wavelet transform can be used to compress linear layers in neural networks. Linear layers still occupy a significant portion of the parameters in recurrent neural networks (RNNs). Through our method, we can learn both the wavelet bases and corresponding coefficients to efficiently represent the linear layers of RNNs. Our wavelet-compressed RNNs have significantly fewer parameters yet still perform competitively with the state-of-the-art on synthetic and real-world RNN benchmarks. Wavelet optimization adds basis flexibility without requiring large numbers of extra weights.

Keywords: network compression, wavelets

Source code is available at https://github.com/v0lta/Wavelet-network-compression.
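For intuition, the sketch below shows the core idea in PyTorch: a dense n-by-n weight matrix is replaced by a single-level fast wavelet transform (FWT) with a learnable filter pair, a learnable diagonal scaling in the wavelet domain, and the inverse transform, so the layer stores n + 2 parameters instead of n * n. This is a minimal, hypothetical sketch, not the paper's exact factorization: the name WaveletLinear, the single-level transform, and the diagonal-only mixing are illustrative assumptions, and the paper additionally uses a wavelet loss to keep the learned filters close to the perfect-reconstruction conditions.

    import torch


    class WaveletLinear(torch.nn.Module):
        # Hypothetical sketch: replaces an n-by-n weight matrix with
        # y = iFWT(diag(d) * FWT(x)), i.e. n + 2 parameters instead of n * n.
        def __init__(self, n):
            super().__init__()
            assert n % 2 == 0, "a single-level transform needs an even size"
            # Learnable low-pass filter, initialized to the orthogonal Haar wavelet.
            self.h = torch.nn.Parameter(torch.tensor([2.0 ** -0.5, 2.0 ** -0.5]))
            # Learnable diagonal scaling in the wavelet domain.
            self.d = torch.nn.Parameter(torch.ones(n))

        def forward(self, x):  # x: (batch, n)
            h = self.h
            # Quadrature-mirror high-pass filter: g[k] = (-1)^k h[L - 1 - k].
            g = torch.stack((h[1], -h[0]))
            # Analysis (FWT): filter neighbouring pairs = convolve and downsample by 2.
            pairs = x.view(x.shape[0], -1, 2)        # (batch, n/2, 2)
            low, high = pairs @ h, pairs @ g         # (batch, n/2) each
            # Scale the wavelet coefficients with the learnable diagonal.
            coeff = torch.cat((low, high), dim=-1) * self.d
            low, high = coeff.chunk(2, dim=-1)
            # Synthesis (inverse FWT) with the transposed filter bank.
            even = h[0] * low + g[0] * high
            odd = h[1] * low + g[1] * high
            return torch.stack((even, odd), dim=-1).reshape(x.shape)

At initialization (Haar filters, d = 1) the layer is the identity, because the orthogonal Haar filter bank reconstructs its input perfectly:

    layer = WaveletLinear(8)
    x = torch.randn(4, 8)
    print(torch.allclose(layer(x), x, atol=1e-5))  # True at initialization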


Bibtex

@INPROCEEDINGS{wolter2020neural,
     author = {Wolter, Moritz and Lin, Shaohui and Yao, Angela},
      title = {Neural Network Compression via Learnable Wavelet Transforms},
  booktitle = {International Conference on Artificial Neural Networks},
       year = {2020},
  publisher = {Springer},
   keywords = {network compression, wavelets},
   abstract = {Wavelets are well known for data compression, yet have rarely been applied to the compression of
               neural networks. This paper shows how the fast wavelet transform can be used to compress linear
               layers in neural networks. Linear layers still occupy a significant portion of the parameters in
               recurrent neural networks (RNNs). Through our method, we can learn both the wavelet bases and
               corresponding coefficients to efficiently represent the linear layers of RNNs. Our wavelet-compressed
               RNNs have significantly fewer parameters yet still perform competitively with the
               state-of-the-art on synthetic and real-world RNN benchmarks. Wavelet optimization adds basis
               flexibility without requiring large numbers of extra weights.},
        url = {https://arxiv.org/pdf/2004.09569.pdf}
}