UPSCALE: Unconstrained Channel Pruning


Modern neural networks are growing not only in size and complexity but also in inference time. One of the most effective compression techniques, channel pruning, combats this trend by removing channels from convolutional weights to reduce resource consumption. However, removing channels is non-trivial for multi-branch segments of a model, where pruning can introduce extra memory copies at inference time. These copies increase latency so much that the pruned model can even be slower than the original, unpruned model. As a workaround, existing pruning works constrain certain channels to be pruned together. This fully eliminates inference-time memory copies, but as we show, these constraints significantly impair accuracy. To solve both challenges, our insight is to enable unconstrained pruning by reordering channels to minimize memory copies. Using this insight, we design UPSCALE, a generic algorithm that prunes models with any pruning pattern. Critically, by removing constraints from existing pruning heuristics, we improve ImageNet top-1 accuracy for post-training pruning by 2.1 points on average, benefiting pruned DenseNet (+16.9), EfficientNetV2 (+7.9), and ResNet (+6.2). Furthermore, our UPSCALE algorithm reduces latency by up to 52.8% compared with naive unconstrained pruning, nearly eliminating memory copies at inference time.
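To make the memory-copy problem concrete, here is a minimal PyTorch sketch, not the paper's implementation; the branch tensors and kept-channel sets are invented for illustration. When a residual add joins two pruned branches, constrained pruning keeps the same channels in both branches, so the add works directly; unconstrained pruning keeps different channels per branch, so each branch's survivors must first be copied into a common channel layout.

```python
import torch

# Hypothetical setup: two branch outputs feeding a residual add.
# All names (branch_a, branch_b, keep_*) are illustrative only.
torch.manual_seed(0)
branch_a = torch.randn(1, 8, 4, 4)  # batch=1, 8 channels, 4x4 feature map
branch_b = torch.randn(1, 8, 4, 4)

# Constrained pruning: both branches keep the SAME channel indices, so the
# residual add works in place -- no memory copy, but the shared mask
# limits which channels each branch may keep, hurting accuracy.
keep_shared = [0, 2, 5, 7]
out_constrained = branch_a[:, keep_shared] + branch_b[:, keep_shared]

# Unconstrained pruning: each branch keeps its own best channels. Before
# the add, each branch's survivors must be scattered into the layout of
# the union of surviving channels -- an inference-time memory copy.
keep_a = [0, 2, 5, 7]
keep_b = [1, 2, 4, 7]
union = sorted(set(keep_a) | set(keep_b))  # channels alive at the add
a_pruned = branch_a[:, keep_a]
b_pruned = branch_b[:, keep_b]

# index_copy_ realigns each branch's channels into the union's layout;
# this zero-fill plus copy is the latency cost that channel reordering
# seeks to minimize by making surviving indices coincide.
a_full = torch.zeros(1, len(union), 4, 4)
b_full = torch.zeros(1, len(union), 4, 4)
a_full.index_copy_(1, torch.tensor([union.index(c) for c in keep_a]), a_pruned)
b_full.index_copy_(1, torch.tensor([union.index(c) for c in keep_b]), b_pruned)
out_unconstrained = a_full + b_full
```

The two index_copy_ calls stand in for the inference-time memory copies at issue: UPSCALE's channel reordering aims to make the surviving indices of adjoining branches coincide so that these copies largely disappear.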


