
BatchNorm Pruning: Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers

May 2020

tl;dr: Similar idea to Network Slimming but with more details.
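The shared mechanism with Network Slimming is to push the BatchNorm scale factors gamma toward zero during training and prune the channels whose gamma vanishes; this paper enforces the sparsity on gamma with ISTA updates rather than a plain L1 gradient step. Below is a minimal PyTorch sketch of that shared gamma-sparsification idea — my own illustration, not code from either paper; `lam` and the pruning threshold are arbitrary choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1, bias=False),
    nn.BatchNorm2d(16),
    nn.ReLU(),
)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
lam = 1e-4  # sparsity strength, chosen arbitrarily for illustration

x = torch.randn(8, 3, 32, 32)
target = torch.randn(8, 16, 32, 32)

loss = F.mse_loss(model(x), target)
# Penalize only the BN scale gamma: under BN, conv weight norms carry no signal.
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        loss = loss + lam * m.weight.abs().sum()
loss.backward()
opt.step()

# After many such training steps, channels with near-zero gamma can be pruned.
gamma = model[1].weight.detach().abs()
print("prunable channels:", int((gamma < 1e-3).sum().item()))
```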

Overall impression

Two problems with norm-based pruning:

Many previous works are norm-based, but this lacks a solid theoretical foundation. First, one cannot assign different Lasso regularization weights to different layers, because the model can be reparameterized (scaling one layer's weights down and the next layer's up by the same positive factor) to reduce the Lasso loss without changing the function the network computes. Second, in the presence of BN, any linear scaling of W does not change the results, so a smaller weight norm does not imply a less informative channel. Both points are demonstrated numerically in the sketch below.
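A quick numerical check of both points — my own sketch, not from the paper; the network shapes and scaling factors are arbitrary.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Point 1: reparameterization reduces the Lasso loss without changing the
# function. ReLU is positively homogeneous, so scaling layer 1 by 1/t and
# layer 2 by t (t > 0) leaves the output intact while the L1 penalty moves.
x = torch.randn(4, 10)
w1, w2 = torch.randn(20, 10), torch.randn(5, 20)
t = (w1.abs().sum() / w2.abs().sum()).sqrt()  # minimizes ||w1||_1/t + t*||w2||_1
y0 = torch.relu(x @ w1.t()) @ w2.t()
y1 = torch.relu(x @ (w1 / t).t()) @ (w2 * t).t()
print(torch.allclose(y0, y1, atol=1e-4))                     # True: same function
print((w1.abs().sum() + w2.abs().sum()).item())              # original L1 loss
print(((w1 / t).abs().sum() + (w2 * t).abs().sum()).item())  # strictly smaller

# Point 2: BN cancels any linear scaling of the preceding conv weights, so the
# conv weight norm says nothing about channel importance.
conv = nn.Conv2d(3, 4, 3, bias=False)
bn = nn.BatchNorm2d(4).train()  # batch stats, so cancellation is exact up to eps
xi = torch.randn(8, 3, 16, 16)
y_ref = bn(conv(xi))
with torch.no_grad():
    conv.weight.mul_(7.3)  # arbitrary positive rescaling
print(torch.allclose(y_ref, bn(conv(xi)), atol=1e-4))        # True
```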

Key ideas

Technical details

Notes