The dynamics of the respiratory system carry important information for detecting lung abnormalities, which makes a reliable dynamic model of the system valuable. In this paper, we introduce a novel dynamic modelling method for characterizing lung sounds (LS), based on the attractor recurrent neural network (ARNN). The ARNN structure supports an effective LS model and can reproduce the distinctive features of lung sounds through the attractors it forms. Furthermore, a novel ARNN topology based on fuzzy functions (FFs-ARNN) is developed. Given the utility of recurrence quantification analysis (RQA) as a tool for assessing the nature of complex systems, it was used to evaluate the performance of both the ARNN and the FFs-ARNN models. Experimental results demonstrate the effectiveness of the proposed approaches for multichannel LS analysis. In particular, a classification accuracy of 91% was achieved using FFs-ARNN with sequences of RQA features.
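
The paper does not include code, but the RQA pipeline it relies on is standard and easy to sketch. Below is a minimal Python illustration, not the authors' implementation: it builds a thresholded recurrence plot from a time-delay embedding and computes two common RQA measures, recurrence rate and determinism. The embedding dimension, delay, and threshold are illustrative assumptions, and for brevity the line of identity is not excluded as it would be in a careful implementation.

    import numpy as np

    def recurrence_plot(x, dim=3, tau=2, eps=0.1):
        # Time-delay embedding of the signal (dim, tau, eps are assumed values).
        n = len(x) - (dim - 1) * tau
        emb = np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)
        # Recurrence matrix: 1 where two embedded states lie within eps of each other.
        d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
        return (d < eps).astype(int)

    def rqa_features(rp, lmin=2):
        rr = rp.mean()                    # recurrence rate
        n = rp.shape[0]
        diag_points = 0
        for k in range(-(n - 1), n):      # scan every diagonal for line segments
            run = 0
            for v in np.diagonal(rp, offset=k):
                if v:
                    run += 1
                else:
                    if run >= lmin:
                        diag_points += run
                    run = 0
            if run >= lmin:
                diag_points += run
        det = diag_points / max(rp.sum(), 1)   # determinism
        return rr, det

    # Example: RQA features of a short synthetic periodic segment.
    x = np.sin(np.linspace(0, 20 * np.pi, 400))
    x += 0.05 * np.random.default_rng(0).standard_normal(400)
    print(rqa_features(recurrence_plot(x)))

In a classifier like the one reported in the abstract, such feature pairs would be computed per channel and per segment and fed to the model as sequences.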

Source
http://dx.doi.org/10.1016/j.compbiomed.2017.03.019

Publication Analysis

Top Keywords

attractor recurrent (8); recurrent neural (8); neural network (8); based fuzzy (8); fuzzy functions (8); effective model (8); lung abnormalities (8); lung sounds (8); network based (4); functions effective (4)

Similar Publications

Continuous bump attractor networks (CANs) have been widely used in the past to explain the phenomenology of working memory (WM) tasks in which continuous-valued information has to be maintained to guide future behavior. Standard CAN models suffer from two major limitations: the stereotyped shape of the bump attractor does not reflect differences in the representational quality of WM items, and the recurrent connections within the network require a biologically unrealistic level of fine-tuning. We address both challenges in a two-dimensional (2D) network model formalized by two coupled neural field equations of Amari type.
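
As a rough illustration of what an Amari-type neural field is, here is a minimal single-field 1D Python sketch; the paper's actual model is 2D with two coupled fields, and every parameter below is invented for the demo. A locally excitatory, laterally inhibitory kernel lets a localized cue persist as a self-sustained bump after the input is removed:

    import numpy as np

    # 1D Amari field: tau * du/dt = -u + (w * f(u))(x); parameters are illustrative.
    n, L = 256, 10.0
    x = np.linspace(-L / 2, L / 2, n)
    dx = x[1] - x[0]

    def kernel(d):
        # Local excitation minus broader inhibition ("Mexican hat" profile).
        return 1.5 * np.exp(-d**2 / (2 * 0.5**2)) - 0.75 * np.exp(-d**2 / (2 * 1.5**2))

    W = kernel(x[:, None] - x[None, :]) * dx                 # discretized convolution
    f = lambda u: 1.0 / (1.0 + np.exp(-10.0 * (u - 0.2)))    # sigmoidal firing rate

    u = np.exp(-x**2)          # transient cue: a bump of activity at x = 0
    dt, tau = 0.05, 1.0
    for _ in range(400):       # Euler integration with the cue already removed
        u += dt / tau * (-u + W @ f(u))
    # u now holds a self-sustained bump: the network's memory of the cued location.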

The integration and interaction of cross-modal senses in brain neural networks can facilitate high-level cognitive functionalities. In this work, we propose a bioinspired multisensory integration neural network (MINN) that integrates visual and audio senses for recognizing multimodal information across different sensory modalities. This deep-learning-based model incorporates a cascading framework of parallel convolutional neural networks (CNNs) for extracting intrinsic features from visual and audio inputs, and a recurrent neural network (RNN) for multimodal information integration and interaction.
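
The abstract describes the architecture only at a high level, so the following PyTorch sketch is a hypothetical reconstruction rather than the authors' MINN: parallel CNN encoders extract per-timestep features from each modality, and a GRU fuses the concatenated features over time. All layer sizes, the class count, and the name MINNSketch are placeholders.

    import torch
    import torch.nn as nn

    class MINNSketch(nn.Module):
        # Hypothetical cascade: parallel CNN encoders feeding a shared RNN.
        def __init__(self, hidden=128, n_classes=10):
            super().__init__()
            self.vis = nn.Sequential(   # visual branch: frame -> 64-d feature
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(16 * 16, 64))
            self.aud = nn.Sequential(   # audio branch: spectrogram slice -> 64-d feature
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(16 * 16, 64))
            self.rnn = nn.GRU(128, hidden, batch_first=True)  # fuses modalities over time
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, frames, spects):
            # frames: (B, T, 3, H, W); spects: (B, T, 1, H, W)
            B, T = frames.shape[:2]
            v = self.vis(frames.flatten(0, 1)).view(B, T, -1)
            a = self.aud(spects.flatten(0, 1)).view(B, T, -1)
            out, _ = self.rnn(torch.cat([v, a], dim=-1))
            return self.head(out[:, -1])   # classify from the final fused state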

Continuous Quasi-Attractors dissolve with too much - or too little - variability.

PNAS Nexus

December 2024

SISSA, Scuola Internazionale Superiore di Studi Avanzati, Cognitive Neuroscience, Trieste 34136, Italy.

Recent research involving bats flying in long tunnels has confirmed that hippocampal place cells can be active at multiple locations, with considerable variability in place field size and peak rate. In self-organizing recurrent networks, such variability implies inhomogeneity in the synaptic weights, impeding the establishment of a continuous manifold of fixed points. Are continuous attractor neural networks still valid models for understanding spatial memory in the hippocampus, given such variability? Here, we ask what the noise limits are, in terms of an experimentally inspired parametrization of the irregularity of a single map, beyond which the notion of a continuous attractor is no longer relevant.
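
The core idea, that weight inhomogeneity breaks a continuous manifold of fixed points into a handful of discrete ones, can be illustrated with a toy ring attractor in a few lines of Python. This is a generic demonstration under invented parameters, not the paper's experimentally inspired parametrization:

    import numpy as np

    # Toy ring attractor: cosine connectivity plus frozen random weight noise.
    n = 200
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    rng = np.random.default_rng(0)
    sigma = 0.05    # weight irregularity; 0 gives a clean continuous attractor
    W = (3.0 * np.cos(theta[:, None] - theta[None, :]) / n
         + sigma * rng.standard_normal((n, n)) / np.sqrt(n))

    f = lambda v: np.tanh(np.maximum(v, 0))   # rectified, saturating rate function
    u = np.cos(theta - np.pi)                 # bump cued at angle pi
    for _ in range(3000):
        u += 0.1 * (-u + W @ f(u))
    print("bump center:", theta[np.argmax(u)])

    # With sigma = 0 the bump stays wherever it was cued (every angle is a fixed
    # point); with sigma > 0 it drifts to one of a few discrete minima, so the
    # continuous attractor degrades into a "quasi-attractor" as irregularity grows.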

Nobel honors for John Hopfield, who ushered attractor dynamics into neuroscience.

Neuron

December 2024

Department of Physics, University of California, San Diego, San Diego, CA 92093, USA; Department of Neurobiology, University of California, San Diego, San Diego, CA 92093, USA.

John Hopfield's model on collective computation linked the recall of memories with interactions and dynamics associated with disordered magnetic systems. Insights from Hopfield's work catalyzed formulations that link the dynamics and emergent properties of recurrently connected generic neurons with the functional properties and signaling observed from brain circuits.
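
For readers unfamiliar with the model being honored, here is the textbook Hopfield recall loop in Python: memories are stored in Hebbian weights, and asynchronous sign updates descend the network's energy until a stored pattern is retrieved from a corrupted cue. The network size, pattern count, and corruption level are arbitrary choices for the demo.

    import numpy as np

    # Classic Hopfield network: Hebbian storage, asynchronous sign updates.
    rng = np.random.default_rng(1)
    N, P = 100, 5
    patterns = rng.choice([-1, 1], size=(P, N))
    W = (patterns.T @ patterns) / N
    np.fill_diagonal(W, 0)                 # no self-connections

    # Start from a corrupted version of pattern 0 (20% of bits flipped).
    state = patterns[0].copy()
    flip = rng.choice(N, size=20, replace=False)
    state[flip] *= -1

    for _ in range(5):                     # a few asynchronous update sweeps
        for i in rng.permutation(N):
            state[i] = 1 if W[i] @ state >= 0 else -1

    print("overlap with stored memory:", (state @ patterns[0]) / N)  # ~1.0 on success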

Recurrent neural networks (RNNs) are an important class of models for learning sequential behavior. However, training RNNs to learn long-term dependencies is a tremendously difficult task, and this difficulty is widely attributed to the vanishing and exploding gradient (VEG) problem. Since the problem was first characterized 30 years ago, the belief that RNNs learn long-term dependencies poorly when VEG occurs during optimization has become a central tenet of the RNN literature, steadily cited as motivation for a wide variety of research advancements.
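
The VEG phenomenon itself is easy to demonstrate numerically. In a linear RNN with state update h_t = W h_{t-1}, the Jacobian of h_T with respect to h_0 is the T-th power of W, so gradient norms shrink or grow geometrically with the spectral radius of W. A minimal sketch with arbitrary sizes:

    import numpy as np

    # Toy VEG demonstration: gradient norm through T steps of a linear RNN.
    rng = np.random.default_rng(0)
    n, T = 32, 50
    for scale in (0.8, 1.0, 1.2):           # spectral radii below / near / above 1
        W = rng.standard_normal((n, n)) / np.sqrt(n) * scale
        J = np.linalg.matrix_power(W, T)     # Jacobian d h_T / d h_0
        print(f"radius ~ {scale}: ||dh_T/dh_0|| = {np.linalg.norm(J):.3e}")
    # The norm collapses toward 0 (vanishing) or blows up (exploding) as T grows,
    # which is exactly the behavior the long-term dependency argument rests on.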
