Associative learning is investigated using neural networks and concepts based on learning automata. The behavior of a single decision-maker containing a neural network is studied in a random environment using reinforcement learning. The objective is to determine the optimal action corresponding to a particular state. Since decisions have to be made throughout the context space on the basis of a countable number of experiments, generalization is inevitable. Many different approaches can be followed to generate the desired discriminant function. Three methods that use neural networks are discussed and compared. In the most general method, the output of the network determines the probability with which one of the actions is chosen. The weights of the network are updated on the basis of the actions taken and the response of the environment. The extension of similar concepts to decentralized decision-making in a context space is also introduced, and simulation results are included. Modifications to the implementation of the most general method that make it practically viable are also presented. All the suggested methods are feasible; the choice among them depends on the accuracy desired and on the available computational power.
DOI: 10.1109/72.80288 (http://dx.doi.org/10.1109/72.80288)
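The abstract describes its most general scheme only verbally: the network's output gives the probability of selecting each action, and the weights are updated from the action taken and the environment's response. Below is a minimal, illustrative sketch of such an associative reinforcement-learning loop, assuming a linear-softmax network over a finite set of contexts, binary (success/failure) environment responses, and a REINFORCE-style reward-only update. All names and parameters (`N_CONTEXTS`, `reward_table`, the learning rate, the number of steps) are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): a linear-softmax
# network maps a context to action probabilities, an action is sampled,
# and the weights are nudged by a reward-weighted policy-gradient update.

rng = np.random.default_rng(0)

N_CONTEXTS, N_ACTIONS, DIM = 4, 3, 4
LEARNING_RATE = 0.1

# One-hot context features; hypothetical success probabilities per (context, action).
contexts = np.eye(N_CONTEXTS, DIM)
reward_table = rng.uniform(0.1, 0.9, size=(N_CONTEXTS, N_ACTIONS))

W = np.zeros((DIM, N_ACTIONS))  # network weights


def action_probs(x):
    """Softmax output of the network: probability of selecting each action."""
    z = x @ W
    z -= z.max()
    e = np.exp(z)
    return e / e.sum()


for step in range(20000):
    c = rng.integers(N_CONTEXTS)                   # random context (state) from the environment
    x = contexts[c]
    p = action_probs(x)
    a = rng.choice(N_ACTIONS, p=p)                 # stochastic action selection
    r = float(rng.random() < reward_table[c, a])   # binary response of the environment

    # Reward-weighted update: move probability mass toward actions that succeeded.
    grad = -p
    grad[a] += 1.0
    W += LEARNING_RATE * r * np.outer(x, grad)

for c in range(N_CONTEXTS):
    learned = action_probs(contexts[c]).argmax()
    print(f"context {c}: learned action {learned}, optimal action {reward_table[c].argmax()}")
```

Updating only on success (r = 1) mirrors reward-inaction schemes from the learning-automata literature; also penalizing failures would give a reward-penalty variant of the same loop.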