Moore-Penrose inverse (MPI)-based fine-tuning of the fully connected (FC) layers of pretrained deep convolutional neural networks (DCNNs) has recently emerged within the inductive transfer learning (ITL) paradigm. However, the approach has seen little practical adoption because of its heavy computational requirements. This work addresses that limitation with a fast retraining strategy that improves the applicability of MPI-based ITL. Specifically, in each retraining epoch a random layer freezing protocol limits the number of layers undergoing feature refinement. In addition, the trainable parameters of the FC layers are refined with an MPI-based update computed under batch processing, which speeds up convergence. Extensive experiments on several ImageNet-pretrained benchmark DCNNs show that the proposed ITL achieves competitive performance with excellent convergence speed compared to conventional ITL methods; for instance, it converges nearly 1.5 times faster than retraining an ImageNet-pretrained ResNet-50 with stochastic gradient descent with momentum (SGDM).
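
The two ingredients described above, per-epoch random layer freezing and a closed-form MPI-based refinement of the FC layer over mini-batches, can be illustrated with a short PyTorch sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: the names `backbone`, `fc`, `freeze_ratio`, and `lam`, as well as the ridge-regularized pseudoinverse solve, are stand-ins for details the abstract does not specify.

```python
# Minimal sketch (not the paper's code) of random layer freezing plus a
# closed-form, batch-accumulated pseudoinverse update of an nn.Linear head.
import torch
import torch.nn as nn

def random_layer_freezing(blocks, freeze_ratio=0.5):
    """Per epoch, randomly freeze a subset of backbone blocks so that only
    the remaining blocks take part in feature refinement."""
    n_freeze = int(len(blocks) * freeze_ratio)
    frozen = set(torch.randperm(len(blocks))[:n_freeze].tolist())
    for i, block in enumerate(blocks):
        trainable = i not in frozen
        for p in block.parameters():
            p.requires_grad = trainable

@torch.no_grad()
def mpi_refine_fc(backbone, fc, loader, lam=1e-3, device="cpu"):
    """Refine the FC weights in closed form over mini-batches:
    accumulate H^T H and H^T T, then solve W = (H^T H + lam*I)^{-1} H^T T,
    a ridge-regularized stand-in for the Moore-Penrose solution."""
    d, c = fc.in_features, fc.out_features
    HtH = torch.zeros(d, d, device=device)
    HtT = torch.zeros(d, c, device=device)
    for x, y in loader:
        h = backbone(x.to(device))                       # FC-input features, shape (B, d)
        t = nn.functional.one_hot(y, c).float().to(device)  # one-hot targets, shape (B, c)
        HtH += h.T @ h
        HtT += h.T @ t
    W = torch.linalg.solve(HtH + lam * torch.eye(d, device=device), HtT)
    fc.weight.copy_(W.T)                                 # nn.Linear stores weight as (out, in)
    if fc.bias is not None:
        fc.bias.zero_()
```

In use, one would call `random_layer_freezing` at the start of each retraining epoch, run gradient updates on the unfrozen blocks, and then call `mpi_refine_fc` to reset the head in a single pass over the data; the batch accumulation keeps memory bounded regardless of dataset size.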

Source: http://dx.doi.org/10.1109/TCYB.2024.3483068
