Fine Granularity Is Critical for Intelligent Neural Network Pruning.

Neural Comput

Learning in Machines and Brains Program, Canadian Institute for Advanced Research, Toronto, ON, Canada M5G 1M1

Published: November 2024

AI Article Synopsis

  • Neural network pruning helps make computer models faster and cheaper to run while trying to keep their accuracy high.
  • There are different ways to prune, such as removing individual weights (fine pruning) or larger structures like whole channels (coarse pruning), and we tested how well these methods work on several image classification tasks.
  • We found that fine pruning preserves accuracy much better than coarse pruning, so fine pruning paired with efficient implementations of the resulting sparse networks looks like the more promising direction.

Article Abstract

Neural network pruning is a popular approach to reducing the computational costs of training and/or deploying a network and aims to do so while minimizing accuracy loss. Pruning methods that remove individual weights (fine granularity) can remove more total network parameters before reaching a given degree of accuracy loss, while methods that preserve some or all of a network's structure (coarser granularity, such as pruning channels from a CNN) take better advantage of hardware and software optimized for dense matrix computations. We compare intelligent iterative pruning using several different criteria sampled from the literature against random pruning at initialization across multiple granularities on two different architectures and three image classification tasks. Our work is the first direct and comprehensive investigation of the relationship between granularity and the efficacy of intelligent pruning relative to a random-pruning baseline. We find that the accuracy advantage of intelligent over random pruning decreases dramatically as granularity becomes coarser, with minimal advantage for intelligent pruning at granularity coarse enough to fully preserve network structure. For instance, at pruning rates where random pruning leaves ResNet-20 at 85.0% test accuracy on CIFAR-10 after 30,000 training iterations, intelligent weight pruning with the best-in-context criterion leaves it at about 90.0% accuracy (on par with the unpruned network), kernel pruning leaves it at about 86.5%, and channel pruning leaves it at about 85.5%. Our results suggest that compared to coarse pruning, fine pruning combined with efficient implementation of the resulting networks is a more promising direction for easing the trade-off between high accuracy and low computational cost.
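As a rough illustration of what the granularities discussed in the abstract correspond to in practice, the sketch below applies a magnitude/norm-based criterion at weight, kernel, and channel level to a single convolution layer. This is not the authors' code; PyTorch, the layer shape, and the 70% sparsity target are illustrative assumptions.

```python
# Minimal sketch of pruning granularities on one conv layer (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(16, 32, kernel_size=3, bias=False)   # weight shape: (32, 16, 3, 3)
w = conv.weight.detach()
sparsity = 0.7                                         # fraction of parameters to remove

# Fine granularity: remove individual weights with the smallest magnitudes.
w_thresh = torch.quantile(w.abs().flatten(), sparsity)
weight_mask = (w.abs() > w_thresh).float()             # (32, 16, 3, 3)

# Intermediate granularity: remove whole 3x3 kernels by their L2 norm.
kernel_norms = w.flatten(2).norm(dim=2)                # (32, 16)
k_thresh = torch.quantile(kernel_norms.flatten(), sparsity)
kernel_mask = (kernel_norms > k_thresh).float()[:, :, None, None]

# Coarse granularity: remove whole output channels by their L2 norm, which keeps
# the remaining layer dense and hardware-friendly.
channel_norms = w.flatten(1).norm(dim=1)               # (32,)
c_thresh = torch.quantile(channel_norms, sparsity)
channel_mask = (channel_norms > c_thresh).float()[:, None, None, None]

# Applying a mask zeroes the pruned parameters; in iterative pruning this step
# would alternate with further training.
with torch.no_grad():
    conv.weight.mul_(weight_mask)
```

Finer masks can reach the same parameter count with less accuracy loss, but only the channel-level mask yields a smaller dense layer that standard dense-matrix kernels exploit directly.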

Source
http://dx.doi.org/10.1162/neco_a_01717

Publication Analysis

Top Keywords

pruning (16); random pruning (12); pruning leaves (12); fine granularity (8); neural network (8); network pruning (8); accuracy loss (8); intelligent pruning (8); advantage intelligent (8); intelligent (6)

Similar Publications

Model compression for real-time object detection using rigorous gradation pruning.

iScience

January 2025

Faculty of Engineering, Technology and Built Environment, UCSI University, Kuala Lumpur, Malaysia.

Achieving lightweight real-time object detection necessitates balancing model compression with detection accuracy, a difficulty exacerbated by low redundancy and uneven contributions from convolutional layers. As an alternative to traditional methods, we propose Rigorous Gradation Pruning (RGP), which uses a desensitized first-order Taylor approximation to assess filter importance, enabling precise pruning of redundant kernels. This approach includes the iterative reassessment of layer significance to protect essential layers, ensuring effective detection performance.
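For readers unfamiliar with Taylor-based filter importance, the sketch below shows the generic first-order criterion that this line of work builds on (|weight × gradient| accumulated per output filter). It is a simplified illustration, not RGP's desensitized variant or its layer reassessment; the toy model, loss, and random data are placeholders.

```python
# Generic first-order Taylor filter-importance score (illustrative placeholder setup).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
x = torch.randn(4, 3, 32, 32)                 # dummy batch
y = torch.randint(0, 10, (4,))                # dummy labels
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

scores = {}
for name, module in model.named_modules():
    if isinstance(module, nn.Conv2d):
        w, g = module.weight, module.weight.grad
        # Estimated change in loss if a filter were removed, per output channel.
        scores[name] = (w * g).flatten(1).sum(dim=1).abs()

# Filters with the smallest scores are the candidates for pruning.
print(scores)
```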


Browsing by ungulates is commonly assumed to target the upper parts of sapling crowns, leading to reduced vertical growth or even growth cessation. However, the extent to which browsing induces shifts in resource allocation toward lateral growth remains unclear. This study explores the impact of browsing intensity (BI) and light availability on the architectural traits of six temperate tree species, focusing on height-diameter ratio (H/D), crown slenderness (CL/CW), and crown irregularity (CI) across sapling height classes.


Glial-derived TNF/Eiger signaling promotes somatosensory neurite sculpting.

Cell Mol Life Sci

January 2025

School of Life Science and Technology, The Key Laboratory of Developmental Genes and Human Disease, Southeast University, Nanjing, China.

The selective elimination of inappropriate projections is essential for sculpting neural circuits during development. The class IV dendritic arborization (C4da) sensory neurons of Drosophila remodel their dendritic branches during metamorphosis. Glial cells in the central nervous system (CNS) are required for programmed axonal pruning of mushroom body (MB) γ neurons during metamorphosis in Drosophila.


Comparative assessment of empirical and hybrid machine learning models for estimating daily reference evapotranspiration in sub-humid and semi-arid climates.

Sci Rep

January 2025

Prince Sultan Bin Abdulaziz International Prize for Water Chair, Prince Sultan Institute for Environmental, Water and Desert Research, King Saud University, P.O. Box 2454, Riyadh 11451, Saudi Arabia.

Improving the accuracy of reference evapotranspiration (RET) estimation is essential for effective water resource management, irrigation planning, and climate change assessments in agricultural systems. The FAO-56 Penman-Monteith (PM-FAO56) model, a widely endorsed approach for RET estimation, often encounters limitations due to the lack of complete meteorological data. This study evaluates the performance of eight empirical models and four machine learning (ML) models, along with their hybrid counterparts, in estimating daily RET within the Gharb and Loukkos irrigated perimeters in Morocco.


The beast and the burden: will pruning performance measurement improve quality?

BMJ Qual Saf

January 2025

National Committee for Quality Assurance, Washington, District of Columbia, USA

