Multilayer Perceptron Network Optimization for Chaotic Time Series Modeling.

Entropy (Basel)

Key Laboratory of Symbol Computation and Knowledge Engineering of the Ministry of Education, College of Computer Science and Technology, Jilin University, 2699 Qianjin Street, Changchun 130012, China.

Published: June 2023

AI Article Synopsis

  • Chaotic time series are difficult to forecast with high precision because of their internal randomness, nonlinearity, and long-term unpredictability.
  • This paper introduces a generalized degree-of-freedom approximation method for multi-layer perceptron (MLP) networks, together with the corresponding Akaike information criterion used as the loss function for training models.
  • The proposed method has been tested on both artificial and real-world chaotic time series, demonstrating its effectiveness in model selection and achieving strong performance in multi-step predictions.

Article Abstract

Chaotic time series are widely present in practice, but due to characteristics such as internal randomness, nonlinearity, and long-term unpredictability, it is difficult to achieve high-precision medium- or long-term predictions. Multi-layer perceptron (MLP) networks are an effective tool for chaotic time series modeling. Focusing on chaotic time series modeling, this paper presents a generalized degree-of-freedom approximation method for MLP networks. We then derive the corresponding Akaike information criterion, which is used as the loss function for training, and thereby develop an overall framework for chaotic time series analysis covering phase space reconstruction, model training, and model selection. To verify the effectiveness of the proposed method, it is applied to two artificial and two real-world chaotic time series. The numerical results show that the proposed optimization method is effective in selecting the best model from a group of candidates. Moreover, the optimized models perform very well in multi-step prediction tasks.
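The framework described above combines phase space reconstruction, MLP training under an information-criterion loss, and model selection. The sketch below illustrates two generic ingredients of such a pipeline, delay-coordinate embedding and AIC-style scoring of candidate MLPs; the embedding dimension, delay, candidate sizes, and the use of scikit-learn's MLPRegressor are illustrative assumptions, and the paper's generalized-degrees-of-freedom correction to the parameter count is not reproduced here.

```python
# Minimal sketch (not the paper's implementation): delay embedding of a scalar
# chaotic series, MLP candidates of different sizes, and selection by a
# classical AIC score. The paper replaces the naive parameter count k with a
# generalized-degrees-of-freedom estimate; that step is omitted here.
import numpy as np
from sklearn.neural_network import MLPRegressor

def delay_embed(x, dim=3, tau=1):
    """Phase space reconstruction: row j is [x(j), x(j+tau), ..., x(j+(dim-1)tau)];
    the one-step-ahead target is x(j+(dim-1)tau+1)."""
    n = len(x) - (dim - 1) * tau
    X = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    y = x[(dim - 1) * tau + 1 : (dim - 1) * tau + n]
    return X[:-1], y

def aic_score(model, X, y):
    """Classical AIC = n*log(RSS/n) + 2k with k = raw number of network weights."""
    n = len(y)
    rss = float(np.sum((y - model.predict(X)) ** 2))
    k = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
    return n * np.log(rss / n) + 2 * k

# Illustrative data: a lightly noise-perturbed logistic-map series.
rng = np.random.default_rng(0)
x = np.empty(2000)
x[0] = 0.4
for t in range(1999):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])
x += 0.001 * rng.standard_normal(x.shape)

X, y = delay_embed(x, dim=3, tau=1)
candidates = [(8,), (16,), (32,), (16, 16)]   # hidden-layer layouts to compare
scores = {}
for hidden in candidates:
    mlp = MLPRegressor(hidden_layer_sizes=hidden, max_iter=2000,
                       random_state=0).fit(X, y)
    scores[hidden] = aic_score(mlp, X, y)
best = min(scores, key=scores.get)
print("AIC per candidate:", scores, "-> selected:", best)
```

In the paper, the raw weight count k would be replaced by the estimated generalized degrees of freedom of the trained MLP, and the resulting criterion also serves as the training loss rather than only as a post hoc selection score.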


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10378385
DOI: http://dx.doi.org/10.3390/e25070973

Publication Analysis

Top Keywords

chaotic time: 28
time series: 28
series modeling: 12
chaotic: 7
time: 7
series: 7
multilayer perceptron: 4
perceptron network: 4
network optimization: 4
optimization chaotic: 4

Similar Publications

To address the challenges of active power dissipation and node voltage fluctuation in the practical transformation of power grids for new-energy generation such as wind and photovoltaic power, an improved Dung Beetle Optimization algorithm based on a hybrid strategy of Levy flight and differential evolution (LDEDBO) is proposed. The paper addresses the issue from three aspects: first, optimizing the DBO algorithm with Chebyshev chaotic mapping, a Levy flight strategy, and a differential evolution algorithm; second, validating the algorithm's feasibility through real-time network reconfiguration at random time points within a 24-h period; and finally, applying the LDEDBO to the dynamic reconfiguration problems of the IEEE-33 and IEEE-69 node buses. The simulation indicates that the power dissipation of the IEEE-33 node bus is decreased by 28.
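As a rough illustration of one ingredient named in that abstract, the snippet below generates Levy-flight steps with Mantegna's algorithm, which hybrid metaheuristics of this kind commonly mix into their position updates; the exponent beta, step scale, and update rule are generic assumptions, not the LDEDBO formulation itself.

```python
# Generic Mantegna-style Levy flight step, as commonly used inside
# swarm/evolutionary hybrids; not the LDEDBO update rule itself.
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Levy-distributed step of dimension `dim` (Mantegna's algorithm)."""
    if rng is None:
        rng = np.random.default_rng()
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)   # numerator ~ N(0, sigma_u^2)
    v = rng.normal(0.0, 1.0, dim)       # denominator ~ N(0, 1)
    return u / np.abs(v) ** (1 / beta)

# Example: perturb a candidate solution relative to the current best.
rng = np.random.default_rng(1)
position, best = rng.random(5), rng.random(5)
new_position = position + 0.01 * levy_step(5, rng=rng) * (position - best)
print(new_position)
```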


Introduction: Proactive esophageal cooling reduces injury during radiofrequency (RF) ablation of the left atrium (LA) for the treatment of atrial fibrillation (AF). New catheters are capable of higher wattage settings, up to 90 W (very high-power short-duration, vHPSD) for 4 s. Varying power and duration, however, does not eliminate the risk of thermal injury.


Chaotic property in general fractional calculus.

Chaos

December 2024

Institute of Mathematics, National Academy of Sciences of Ukraine, Tereshchenkivska 3, Kiev 01024, Ukraine.

We prove the chaos property, in the sense of Devaney, of the discrete-time fractional derivative understood in the framework of general fractional calculus. Here, the discrete-time derivative means the discretization of a differential-convolution operator whose kernel has a Laplace transform belonging to the Stieltjes class.
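For context, a common Caputo-type form of such a differential-convolution operator in general fractional calculus is sketched below; the kernel notation follows Kochubei-style presentations and is illustrative rather than this paper's own.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Caputo-type general fractional derivative with kernel k (illustrative notation);
% the classical Caputo derivative of order \alpha is recovered for
% k(t) = t^{-\alpha}/\Gamma(1-\alpha), 0 < \alpha < 1.
\[
  \bigl(\mathbb{D}_{(k)} u\bigr)(t)
    = \frac{\mathrm{d}}{\mathrm{d}t}\int_0^t k(t-\tau)\,u(\tau)\,\mathrm{d}\tau
      - k(t)\,u(0),
  \qquad
  \widetilde{k}(p) = \int_0^\infty e^{-pt}\,k(t)\,\mathrm{d}t
  \ \text{a Stieltjes function}.
\]
\end{document}
```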


How neural networks work: Unraveling the mystery of randomized neural networks for functions and chaotic dynamical systems.

Chaos

December 2024

Department of Electrical and Computer Engineering, the Clarkson Center for Complex Systems Science, Clarkson University, Potsdam, New York 13699, USA.

Artificial Neural Networks (ANNs) have proven highly effective at a wide range of machine learning tasks and are now embedded in many widely used technologies. A basic machine learning task to which neural networks are well suited is supervised learning, including learning orbits from time samples of dynamical systems. The usual approach in ANNs is to fully train all of the parameters, perhaps many millions, that define the network architecture.
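The abstract contrasts fully training every weight with randomized alternatives. The sketch below shows the generic random-feature idea (fix random hidden weights, fit only a linear read-out by regularized least squares); the widths, activation, and ridge parameter are assumptions for illustration, not the specific construction analyzed in that paper.

```python
# Generic randomized-network sketch: hidden weights are drawn at random and
# frozen; only the linear read-out is fit (ridge-regularized least squares).
# This is the common "extreme learning machine"/random-feature idea, not
# necessarily the construction studied in the referenced paper.
import numpy as np

rng = np.random.default_rng(0)

def fit_random_features(X, y, width=300, scale=1.0, ridge=1e-6, rng=rng):
    W = rng.normal(0.0, scale, (X.shape[1], width))   # random, untrained hidden weights
    b = rng.uniform(-np.pi, np.pi, width)             # random biases
    H = np.tanh(X @ W + b)                            # fixed nonlinear features
    # Closed-form read-out: (H^T H + ridge*I) beta = H^T y
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(width), H.T @ y)
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Example: learn one step of the chaotic logistic map from sampled points.
x = rng.random(1000)
y = 3.9 * x * (1.0 - x)
W, b, beta = fit_random_features(x[:, None], y)
print("max abs error:", np.max(np.abs(predict(x[:, None], W, b, beta) - y)))
```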


Magnetic structures in the explicitly time-dependent nontwist map.

Chaos

December 2024

Department of Atomic Physics, Eötvös Loránd University, 1117 Pázmány Péter sétány 1A, Budapest, Hungary.

We investigate how the magnetic structures of the plasma change in a large aspect ratio tokamak perturbed by an ergodic magnetic limiter, when a system parameter is non-adiabatically varied in time. We model such a scenario by considering the Ullmann-Caldas nontwist map, where we introduce an explicit time-dependence to the ratio of the limiter and plasma currents. We apply the tools developed recently in the field of chaotic Hamiltonian systems subjected to parameter drift.
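The Ullmann-Caldas map itself is not reproduced here. As a generic stand-in, the sketch below iterates the standard nontwist map of del-Castillo-Negrete and Morrison while slowly drifting the perturbation amplitude during the orbit, which illustrates the kind of explicit, non-adiabatic parameter variation the abstract refers to; the parameter values and drift rate are arbitrary choices.

```python
# Standard nontwist map with a slowly drifting perturbation amplitude b(n);
# a generic stand-in for the Ullmann-Caldas tokamak map, not that map itself.
import numpy as np

def drifting_nontwist_orbit(x0, y0, a=0.615, b0=0.4, db=1e-4, steps=5000):
    """Iterate y_{n+1} = y_n - b(n) sin(2*pi*x_n), x_{n+1} = x_n + a(1 - y_{n+1}^2) mod 1,
    with b(n) = b0 + db*n varying during the orbit (explicit time dependence)."""
    xs, ys = np.empty(steps), np.empty(steps)
    x, y = x0, y0
    for n in range(steps):
        b = b0 + db * n                      # parameter drift
        y = y - b * np.sin(2 * np.pi * x)
        x = (x + a * (1.0 - y * y)) % 1.0
        xs[n], ys[n] = x, y
    return xs, ys

xs, ys = drifting_nontwist_orbit(0.1, 0.2)
print("final point:", xs[-1], ys[-1])
```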

