Recently, Moore-Penrose inverse (MPI)-based parameter fine-tuning of fully connected (FC) layers in pretrained deep convolutional neural networks (DCNNs) has emerged within the inductive transfer learning (ITL) paradigm. However, this approach has seen little adoption in practical applications because of its stringent computational requirements. This work addresses that issue through a novel fast retraining strategy that broadens the applicability of MPI-based ITL. Specifically, during each retraining epoch, a random layer-freezing protocol limits the number of layers undergoing feature refinement. In addition, an MPI-based procedure refines the trainable parameters of the FC layers under batch processing, which speeds convergence. Extensive experiments on several ImageNet-pretrained benchmark DCNNs demonstrate that the proposed ITL achieves competitive performance with excellent convergence speed compared with conventional ITL methods. For instance, the proposed strategy converges nearly 1.5 times faster than retraining an ImageNet-pretrained ResNet-50 with stochastic gradient descent with momentum (SGDM).
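The abstract names two ingredients but gives no implementation details, so the following is only a minimal, hypothetical sketch of what they might look like: a per-epoch random layer-freezing step and a batch-wise Moore-Penrose (pseudoinverse-style) refit of the FC-layer weights. The function names, the ridge regularization term, and the normal-equation batch accumulation are illustrative assumptions, not the authors' method.

```python
import numpy as np

# Hypothetical sketch; not the paper's implementation.

def mpi_fc_update(HtH, HtT, lam=1e-3):
    """Closed-form, pseudoinverse-style FC weight refit from accumulated
    normal equations: W = (H^T H + lam*I)^-1 H^T T (ridge-regularized)."""
    d = HtH.shape[0]
    return np.linalg.solve(HtH + lam * np.eye(d), HtT)

def random_layer_freeze(layer_names, rng, keep_trainable=2):
    """Per-epoch random freezing: pick a small subset of layers to refine
    and freeze the rest (selection rule assumed for illustration)."""
    return set(rng.choice(layer_names, size=keep_trainable, replace=False))

rng = np.random.default_rng(0)

# Per-epoch freezing decision over hypothetical backbone blocks.
layers = [f"block{i}" for i in range(1, 5)]
active = random_layer_freeze(layers, rng)   # e.g. {'block1', 'block3'}

# Batch-wise accumulation of H^T H and H^T T so the FC weights can be
# refreshed from streaming mini-batches without storing all features.
n_feat, n_cls = 64, 5
HtH = np.zeros((n_feat, n_feat))
HtT = np.zeros((n_feat, n_cls))
for _ in range(10):                          # 10 mini-batches of backbone features
    H = rng.standard_normal((32, n_feat))    # stand-in for extracted features
    T = np.eye(n_cls)[rng.integers(0, n_cls, 32)]  # one-hot targets
    HtH += H.T @ H
    HtT += H.T @ T

W = mpi_fc_update(HtH, HtT)                  # refreshed FC weights, shape (64, 5)
print(active, W.shape)
```

A design note on the sketch: accumulating the normal equations per batch keeps memory constant in the number of samples, which is one plausible reading of "refining the trainable parameters of FC layers under batch processing."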
DOI: http://dx.doi.org/10.1109/TCYB.2024.3483068