Free-running recurrent neural networks (RNNs), especially probabilistic models, generate an ongoing information flux that can be quantified by the mutual information I[x(t), x(t+1)] between subsequent system states x. Although previous studies have shown that I depends on the statistics of the network's connection weights, it has remained unclear how to maximize I systematically and how to quantify the flux in large systems, where computing the mutual information becomes intractable. Here, we address these questions using Boltzmann machines as model systems. We find that in networks with moderately strong connections, the mutual information I is approximately a monotonic transformation of the root-mean-square averaged Pearson correlations between neuron pairs, a quantity that can be efficiently computed even in large systems. Furthermore, evolutionary maximization of I[x(t), x(t+1)] reveals a general design principle for the weight matrices, enabling the systematic construction of systems with a high spontaneous information flux. Finally, we simultaneously maximize the information flux and the mean period length of cyclic attractors in the state space of these dynamical networks. Our results are potentially useful for the construction of RNNs that serve as short-term memories or pattern generators.
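To illustrate the correlation-based proxy described in the abstract, the following sketch simulates a small stochastic binary network (a parallel Glauber-style update as in a Boltzmann machine; the network size, weight statistics, and run length are hypothetical choices, not the paper's settings) and computes the root-mean-square of the pairwise Pearson correlations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: a small stochastic binary network whose weight
# matrix W is random with moderate standard deviation (illustrative values).
N, T = 20, 20_000
W = rng.normal(0.0, 1.5 / np.sqrt(N), size=(N, N))
np.fill_diagonal(W, 0.0)

x = rng.choice([-1.0, 1.0], size=N)
states = np.empty((T, N))
for t in range(T):
    h = W @ x
    p_up = 1.0 / (1.0 + np.exp(-2.0 * h))      # P(x_i = +1 | local field h_i)
    x = np.where(rng.random(N) < p_up, 1.0, -1.0)
    states[t] = x

# Root-mean-square of the off-diagonal pairwise Pearson correlations --
# the efficiently computable proxy for the mutual-information flux.
C = np.corrcoef(states.T)
off = C[~np.eye(N, dtype=bool)]
rms_corr = np.sqrt(np.mean(off ** 2))
print(f"RMS pairwise Pearson correlation: {rms_corr:.3f}")
```

Unlike the mutual information itself, this quantity requires only second-order statistics and therefore scales to large networks.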
DOI: http://dx.doi.org/10.1162/neco_a_01651
Sci Rep
December 2024
Department of Mathematics, University of Gujrat, Gujrat, 50700, Pakistan.
This study applies recurrent neural networks with a Bayesian regularization optimizer (RNNs-BRO) to analyze the effect of various physical parameters on the fluid velocity, temperature, and mass concentration profiles in the Darcy-Forchheimer flow of propylene glycol mixed with carbon nanotubes across a stretched cylinder. This model has significant applications in thermal systems such as heat exchangers, chemical processing, and medical cooling devices. The data-set of the proposed model has been generated by varying parameters such as the curvature parameter, inertia coefficient, Hartmann number, porosity parameter, Eckert number, Prandtl number, radiation parameter, activation energy variable, Schmidt number, and reaction rate parameter across different scenarios.
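A data-set of this kind is typically assembled by sweeping a grid over the physical parameters. A minimal sketch (the parameter names follow the abstract, but the value grids are illustrative placeholders, not the study's actual settings):

```python
import itertools

# Illustrative parameter grids only -- not the study's actual values.
param_grid = {
    "curvature_parameter": [0.1, 0.3, 0.5],
    "inertia_coefficient": [0.2, 0.4],
    "hartmann_number":     [0.5, 1.0],
    "schmidt_number":      [1.0],
}

# One scenario per combination of parameter values.
scenarios = [dict(zip(param_grid, values))
             for values in itertools.product(*param_grid.values())]
print(len(scenarios))  # 3 * 2 * 2 * 1 = 12 scenarios
```

Each scenario would then be solved numerically (e.g., with a boundary-value solver for the flow equations) to produce one training example for the network.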
Sci Rep
December 2024
School of Mechanical and Electrical Engineering, Qiqihar University, Qiqihar, 161006, China.
A prediction model of the pig house environment based on Bayesian optimization (BO), a squeeze-and-excitation block (SE), a convolutional neural network (CNN), and a gated recurrent unit (GRU) is proposed to improve prediction accuracy, enhance animal welfare, and enable control measures to be taken in advance. To ensure an optimal model configuration, a BO algorithm fine-tunes hyper-parameters such as the number of GRUs, the initial learning rate, and the L2 regularization factor. The environmental data are fed into the SE-CNN block, which extracts local features of the data through convolutional operations.
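The squeeze-and-excitation step can be sketched in plain NumPy. This is a generic SE block for 1-D feature maps; the channel count, sequence length, and reduction ratio below are illustrative, not the paper's configuration:

```python
import numpy as np

def squeeze_excitation(feature_maps, w1, w2):
    """Squeeze-and-excitation recalibration of channel-wise features.

    feature_maps: (channels, length) array of 1-D convolutional features.
    w1, w2: weights of the two fully connected layers in the excitation step.
    """
    squeeze = feature_maps.mean(axis=1)             # global average pooling
    hidden = np.maximum(0.0, w1 @ squeeze)          # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid gate in (0, 1)
    return feature_maps * scale[:, None]            # channel-wise reweighting

rng = np.random.default_rng(1)
C, L, r = 8, 16, 2                                  # channels, length, reduction ratio
maps = rng.normal(size=(C, L))
w1 = rng.normal(size=(C // r, C))
w2 = rng.normal(size=(C, C // r))
out = squeeze_excitation(maps, w1, w2)
print(out.shape)  # (8, 16)
```

The gate learns to emphasize informative channels before the recalibrated features are passed on to the GRU.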
Sci Rep
December 2024
Knight Foundation School of Computing and Information Sciences, Florida International University, Miami, USA.
Groundwater monitoring is a crucial part of groundwater remediation, producing data from strategically placed wells to maintain a water-quality standard. Using well data from the United States Department of Energy's Hanford 100-HRD area, recurrent neural networks are trained in the form of one-dimensional Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and Dual-stage Attention-based LSTM (DA-LSTM) networks to reduce monitoring costs and to increase the responsiveness of data sampling, which is otherwise subject to laboratory analysis delays. The best-performing network, DA-LSTM, achieves an R score of 0.82.
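The abstract reports an "R score" for model quality; assuming this refers to the coefficient of determination R², a standard goodness-of-fit measure for such regressions, it can be computed as follows (the data values are made up for illustration):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical measured vs. predicted concentration values.
y = np.array([3.0, 5.0, 2.5, 7.0, 4.6])
y_hat = np.array([2.8, 5.3, 2.9, 6.8, 4.4])
print(round(r_squared(y, y_hat), 3))  # -> 0.971
```

R² = 1 means perfect prediction, while 0 means the model does no better than predicting the mean.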
Sci Rep
December 2024
College of Electronic and Information Engineering, Guangdong Ocean University, ZhanJiang, 524088, China.
In the context of social networks becoming primary platforms for information dissemination and public discourse, understanding how opinions compete and reach consensus has become increasingly vital. This paper introduces a novel distributed competition model designed to elucidate the dynamics of competitive opinion behavior in social networks. The proposed model captures the development mechanism of various opinions, their appeal to individuals, and the impact of the social environment on their evolution.
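The paper's full competition model is not reproduced here; the sketch below shows only the classical neighbor-averaging (DeGroot-style) mechanism that such opinion-dynamics models build upon, with an illustrative four-agent influence matrix:

```python
import numpy as np

# Illustrative sketch only: DeGroot-style consensus on a small social
# network. The paper's distributed competition model is richer (competing
# opinions, environmental influence); this shows the basic mechanism.
A = np.array([                       # row-stochastic influence matrix
    [0.6, 0.2, 0.2, 0.0],
    [0.3, 0.4, 0.3, 0.0],
    [0.0, 0.3, 0.4, 0.3],
    [0.0, 0.0, 0.5, 0.5],
])
x = np.array([1.0, 0.8, 0.2, 0.0])   # initial opinions in [0, 1]
for _ in range(100):
    x = A @ x                        # each agent averages its neighbors
print(np.round(x, 3))                # opinions converge toward consensus
```

Because the influence matrix is row-stochastic and the network is connected, repeated averaging drives all agents to a common value, a weighted mean of the initial opinions.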
Neural Netw
December 2024
School of Mathematics and Statistics, Yili Normal University, Yining 835000, China.
In this paper, a recurrent neural network is proposed for distributed nonconvex optimization subject to globally coupled (in)equality constraints and local bound constraints. Two distributed optimization models, including a resource allocation problem and a consensus-constrained optimization problem, are established, where the objective functions are not necessarily convex, or the constraints do not guarantee a convex feasible set. To handle the nonconvexity, an augmented Lagrangian function is designed, based on which a recurrent neural network is developed for solving the optimization models in a distributed manner, and the convergence to a local optimal solution is proven.
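A minimal sketch of the augmented-Lagrangian mechanism follows: a centralized gradient version of what the proposed recurrent network performs in a distributed manner. The toy nonconvex objective and equality constraint are hypothetical, chosen only to show primal descent coupled with dual ascent:

```python
import numpy as np

# Toy problem: minimize f(x) = (x1^2 - 1)^2 + x2^2  subject to  x1 + x2 = 1.
def f(x):      return (x[0] ** 2 - 1.0) ** 2 + x[1] ** 2
def grad_f(x): return np.array([4.0 * x[0] * (x[0] ** 2 - 1.0), 2.0 * x[1]])
def g(x):      return x[0] + x[1] - 1.0               # equality constraint
grad_g = np.array([1.0, 1.0])

x, lam, rho, eta = np.array([0.5, 0.5]), 0.0, 10.0, 0.01
for _ in range(20_000):
    # Primal gradient descent on the augmented Lagrangian
    # L(x, lam) = f(x) + lam * g(x) + (rho / 2) * g(x)^2 ...
    x -= eta * (grad_f(x) + (lam + rho * g(x)) * grad_g)
    # ... interleaved with dual gradient ascent on the multiplier.
    lam += eta * rho * g(x)

print(np.round(x, 3), round(g(x), 6))  # converges to a KKT point, g(x) ~ 0
```

The penalty term rho/2 * g(x)² is what lets the method handle nonconvexity: it locally convexifies the Lagrangian around feasible points, so plain gradient dynamics (and hence a recurrent network implementing them) can converge to a local optimum.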