IEEE Trans Neural Netw Learn Syst
January 2024
Model compression methods are being developed to bridge the gap between the massive scale of neural networks and the limited hardware resources of edge devices. Because real-world applications deployed on resource-limited hardware platforms typically face multiple hardware constraints simultaneously, existing model compression approaches that optimize only a single hardware objective are ineffective. In this article, we propose an automated pruning method called multi-constrained model compression (MCMC) that optimizes multiple hardware targets, such as latency, floating point operations (FLOPs), and memory usage, while minimizing the impact on accuracy.
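To make the multi-constraint setting concrete, here is a minimal sketch of pruning under several hardware budgets at once. The layer costs, budget values, linear cost model, and grid search below are illustrative assumptions, not the authors' MCMC algorithm:

import itertools

# Hypothetical per-layer costs at full width: (FLOPs in M, memory in MB, latency in ms).
LAYERS = {"conv1": (90.0, 1.2, 0.8), "conv2": (180.0, 2.4, 1.5), "conv3": (360.0, 4.8, 2.9)}
BUDGETS = {"flops": 400.0, "memory": 6.0, "latency": 3.5}  # assumed hardware targets

def costs(ratios):
    """Aggregate costs when each layer keeps `ratio` of its filters.
    FLOPs/memory/latency are assumed to scale roughly linearly here."""
    f = sum(c[0] * r for c, r in zip(LAYERS.values(), ratios))
    m = sum(c[1] * r for c, r in zip(LAYERS.values(), ratios))
    l = sum(c[2] * r for c, r in zip(LAYERS.values(), ratios))
    return {"flops": f, "memory": m, "latency": l}

def feasible(ratios):
    c = costs(ratios)
    return all(c[k] <= BUDGETS[k] for k in BUDGETS)  # every constraint must hold

# Grid-search the keep-ratios and keep the largest feasible model as a crude
# accuracy proxy (more retained filters ~ less accuracy loss).
grid = [0.25, 0.5, 0.75, 1.0]
best = max((r for r in itertools.product(grid, repeat=len(LAYERS)) if feasible(r)),
           key=sum, default=None)
print("keep ratios per layer:", best, "->", costs(best) if best else "infeasible")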
IEEE Trans Neural Netw Learn Syst
August 2024
Recently, value-based centralized training with decentralized execution (CTDE) multi-agent reinforcement learning (MARL) methods have achieved excellent performance in cooperative tasks. However, the most representative of these methods, Q-network MIXing (QMIX), restricts the joint-action Q-values to a monotonic mixing of each agent's utilities. Furthermore, current methods cannot generalize to unseen environments or different agent configurations, a setting known as ad hoc team play.
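The monotonicity restriction can be illustrated with a minimal QMIX-style mixing network; the layer shapes and hypernetwork sizes below are assumptions for illustration, not the paper's architecture. Non-negative mixing weights (via torch.abs) guarantee the joint Q-value is monotone in every agent's utility:

import torch
import torch.nn as nn

class MonotonicMixer(nn.Module):
    def __init__(self, n_agents, state_dim, embed_dim=32):
        super().__init__()
        # Hypernetworks: the global state generates the mixing weights and biases.
        self.w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.b1 = nn.Linear(state_dim, embed_dim)
        self.w2 = nn.Linear(state_dim, embed_dim)
        self.b2 = nn.Sequential(nn.Linear(state_dim, embed_dim), nn.ReLU(),
                                nn.Linear(embed_dim, 1))
        self.n_agents, self.embed_dim = n_agents, embed_dim

    def forward(self, agent_qs, state):
        # agent_qs: (batch, n_agents); state: (batch, state_dim)
        w1 = torch.abs(self.w1(state)).view(-1, self.n_agents, self.embed_dim)
        h = torch.relu(torch.bmm(agent_qs.unsqueeze(1), w1) + self.b1(state).unsqueeze(1))
        w2 = torch.abs(self.w2(state)).view(-1, self.embed_dim, 1)
        q_tot = torch.bmm(h, w2) + self.b2(state).unsqueeze(1)
        return q_tot.squeeze(-1).squeeze(-1)  # (batch,), monotone in each agent utility

qs = torch.rand(4, 3)                         # 4 samples, 3 agents' utilities
q_tot = MonotonicMixer(3, 8)(qs, torch.rand(4, 8))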
Durability and reliability are the major bottlenecks of the proton-exchange-membrane fuel cell (PEMFC) for large-scale commercial deployment. With the help of prognostic approaches, we can reduce its maintenance cost and maximize its lifetime. This paper proposes a hybrid prognostic method for PEMFCs based on a decomposition forecasting framework.
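As a rough illustration of decomposition-based forecasting for prognostics, the sketch below splits a synthetic degradation signal into a trend plus a residual, extrapolates the trend, and estimates remaining useful life. The moving-average decomposition, linear extrapolation, and failure threshold are assumptions for illustration, not the paper's hybrid method:

import numpy as np

def moving_average(x, w=25):
    """Smooth trend via a centered moving average (edges padded by reflection)."""
    pad = w // 2
    xp = np.pad(x, pad, mode="reflect")
    return np.convolve(xp, np.ones(w) / w, mode="valid")[: len(x)]

rng = np.random.default_rng(0)
t = np.arange(500, dtype=float)
voltage = 3.3 - 0.001 * t + 0.02 * rng.standard_normal(500)  # synthetic PEMFC decay

trend = moving_average(voltage)
resid = voltage - trend  # residual, modeled here as zero-mean noise

# Forecast: extrapolate the trend linearly over a future horizon.
k = np.polyfit(t, trend, 1)
horizon = np.arange(500, 600, dtype=float)
forecast = np.polyval(k, horizon)

# Remaining useful life: first time the forecast crosses a failure threshold.
threshold = 2.75
below = np.nonzero(forecast < threshold)[0]
print("predicted failure step:", horizon[below[0]] if below.size else "beyond horizon")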
IEEE Trans Image Process
September 2022
This paper focuses on mask utilization in video object segmentation (VOS). The mask here means the reference masks in the memory bank, i.e.
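Since the abstract is truncated, the following is only a generic sketch of how reference masks in a memory bank are commonly read out (an STM-style attention read); all shapes and names are assumptions, not the paper's design:

import numpy as np

def memory_read(query_key, mem_keys, mem_masks, tau=1.0):
    """Soft-retrieve a mask value for each query pixel from stored reference masks.

    query_key: (HW, C) features of the current frame
    mem_keys:  (N, C)  features stored with past frames
    mem_masks: (N,)    foreground probability stored with each memory entry
    """
    # Affinity between every query pixel and every memory entry.
    logits = query_key @ mem_keys.T / tau                    # (HW, N)
    logits -= logits.max(axis=1, keepdims=True)              # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)                  # softmax over memory
    return attn @ mem_masks                                  # (HW,) read-out mask

rng = np.random.default_rng(1)
mask = memory_read(rng.standard_normal((16, 8)),             # 16 query pixels, 8-d keys
                   rng.standard_normal((32, 8)),             # 32 memory entries
                   (rng.random(32) > 0.5).astype(float))
print(mask.shape)  # (16,)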
IEEE Trans Neural Netw Learn Syst
September 2021
Filter pruning is an important feature selection technique for shrinking existing feature fusion schemes (particularly convolution computation and model size), which helps to develop more efficient feature fusion models while maintaining state-of-the-art performance. It also reduces the storage and computation requirements of deep neural networks (DNNs) and dramatically accelerates inference. Existing methods mainly rely on manual constraints such as normalization to select the filters.
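For context, here is a minimal sketch of the norm-based selection the abstract refers to as the existing baseline (not the paper's proposed method): filters with the smallest L1 norms are marked for removal.

import torch
import torch.nn as nn

def l1_filter_scores(conv: nn.Conv2d) -> torch.Tensor:
    """Score each output filter by the L1 norm of its weights."""
    # weight shape: (out_channels, in_channels, kH, kW)
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

def filters_to_prune(conv: nn.Conv2d, prune_ratio: float = 0.5) -> torch.Tensor:
    """Indices of the lowest-scoring filters under the given prune ratio."""
    scores = l1_filter_scores(conv)
    n_prune = int(prune_ratio * scores.numel())
    return torch.argsort(scores)[:n_prune]

conv = nn.Conv2d(16, 32, kernel_size=3)
idx = filters_to_prune(conv, prune_ratio=0.25)
print(f"prune {idx.numel()} of {conv.out_channels} filters:", idx.tolist())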