In contemporary industrial systems, remaining useful life (RUL) prediction is recognized as a valuable maintenance strategy for health management, owing to its ability to monitor equipment operational status in real time and safeguard industrial production. Current studies have largely concentrated on deep learning (DL) techniques, leaving a shortage of RUL prediction methods that utilize deep reinforcement learning (DRL). To advance the application and study of DRL in this area, this paper introduces a novel DRL-based approach to RUL prediction that combines a Convolutional Neural Network-Bidirectional Long Short-Term Memory network (CNN-BiLSTM) with the Deep Deterministic Policy Gradient (DDPG) algorithm. The proposed method reframes the conventional task of estimating RUL as a Markov decision process (MDP), integrating the feature extraction capabilities of DL with the decision-making abilities of DRL. First, a hybrid CNN-BiLSTM is employed to build an agent that extracts degradation features from raw signals. Then, the DDPG algorithm is leveraged to construct the RUL prediction mechanism, completing the MDP by defining appropriate action spaces and reward functions. Through repeated trials and optimization, the agent learns to map the current operational state of a rolling bearing to its remaining useful life. Validation was performed on the Intelligent Maintenance Systems (IMS) bearing dataset. The findings show that the DRL-based approach outperforms existing methods, achieving superior performance on the root mean square error (RMSE) and mean square error (MSE) metrics, with predictions that align more closely with the actual lifespan values.
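The MDP framing described above can be sketched in miniature. The sketch below is an illustrative assumption, not the paper's exact formulation: the state is a vector of degradation features (which the paper extracts with a CNN-BiLSTM), the action is a continuous RUL estimate (matching DDPG's continuous action space), and the reward is the negative absolute prediction error, so that maximizing cumulative reward drives the agent toward accurate RUL estimates. The linear degradation trajectory and the naive agent are toy stand-ins.

```python
import math

def reward(predicted_rul: float, true_rul: float) -> float:
    """Negative absolute error: the agent is rewarded for accuracy.
    (Reward shape is an assumption for this sketch.)"""
    return -abs(predicted_rul - true_rul)

def rollout(agent, states, true_ruls):
    """Run an agent over one degradation trajectory, accumulating reward."""
    preds, total = [], 0.0
    for state, rul in zip(states, true_ruls):
        action = agent(state)        # agent maps state -> RUL estimate
        preds.append(action)
        total += reward(action, rul)
    return preds, total

# Toy trajectory: a single (hypothetical) health-indicator feature that
# degrades linearly from 1.0 to 0.1 as the true RUL drops from 10 to 1.
states = [[1.0 - t / 10.0] for t in range(10)]
true_ruls = [10.0 - t for t in range(10)]

# Naive agent: rescales the health indicator to an RUL estimate.
naive_agent = lambda s: 10.0 * s[0]

preds, total = rollout(naive_agent, states, true_ruls)
rmse = math.sqrt(sum((p - r) ** 2 for p, r in zip(preds, true_ruls)) / len(preds))
```

In the paper's actual setup, the hand-written `naive_agent` is replaced by the CNN-BiLSTM actor, and DDPG updates it by gradient ascent on the critic's value estimate rather than by direct supervision.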
DOI: http://dx.doi.org/10.1063/5.0225277