Sensors (Basel)
November 2023
Robots are becoming increasingly sophisticated in the execution of complex tasks. However, one area that still requires development is the ability to act in dynamically changing environments. To advance this, research has turned towards understanding the human brain and applying that understanding to improve robotics.
Networks of spiking neurons can have persistently firing stable bump attractors to represent continuous spaces (such as temperature). This can be achieved with a topology of local excitatory synapses and local surround inhibitory synapses. Activating large ranges of the attractor can lead to multiple bumps that show repeller and attractor dynamics; however, these bumps can be merged by overcoming the repeller dynamics.
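The local-excitation, surround-inhibition topology can be illustrated with a minimal rate-based sketch (the work itself uses spiking neurons, and all parameter values below are illustrative, not taken from it): a transient cue leaves behind a persistent bump of activity on a ring of neurons.

```python
import numpy as np

# Ring of rate neurons with local excitatory weights and wider
# surround-inhibitory weights (a "Mexican hat" profile).
N = 100
idx = np.arange(N)
d = np.abs(idx[:, None] - idx[None, :])
d = np.minimum(d, N - d)                       # circular distance
W = np.exp(-d**2 / (2 * 3.0**2)) - 0.5 * np.exp(-d**2 / (2 * 10.0**2))

r = np.zeros(N)
for t in range(400):
    stim = np.zeros(N)
    if t < 50:
        stim[45:55] = 2.0                      # transient cue near index 50
    drive = W @ r + stim
    r += 0.1 * (-r + np.tanh(np.maximum(drive, 0.0)))  # leaky rate dynamics

# After the cue ends, a bump of activity persists near where it was.
print(round(r.max(), 2), int(np.argmax(r)))
```

Neurons far from the bump receive only inhibition and stay silent, while mutual excitation inside the bump sustains it, which is the attractor behaviour the abstract describes.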
The best way to develop an AI that passes the Turing test is to follow the human model: an embodied agent that functions over a wide range of domains, is a human cognitive model, follows human neural functioning, and learns. These properties will endow the agent with the deep semantics required to pass the test. An embodied agent functioning over a wide range of domains is needed so that it is exposed to, and can learn, the semantics of those domains.
Humans process language with their neurons. Memory in neurons is supported by neural firing and by short- and long-term synaptic weight change; emergent behaviours of neurons, such as synchronous firing and cell assembly dynamics, are also forms of memory. As the language signal moves to later stages, it is processed by different mechanisms that are slower but more persistent.
A system with some degree of biological plausibility is developed to categorise items from a widely used machine learning benchmark. The system uses fatiguing leaky integrate-and-fire neurons, a relatively coarse point model that roughly duplicates biological spiking properties; this allows spontaneous firing based on hypo-fatigue, so that neurons not directly stimulated by the environment may be included in the circuit. A novel compensatory Hebbian learning algorithm is used that considers the total synaptic weight coming into a neuron.
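One plausible reading of such a compensatory rule can be sketched as follows; the exact algorithm in the paper may differ, and every name and parameter value here is an assumption for illustration. The idea shown is that the Hebbian update at each synapse is scaled by the gap between a target total incoming weight and the neuron's current total, so heavily connected neurons learn less and lightly connected ones learn more.

```python
import numpy as np

# Hypothetical compensatory Hebbian sketch (not the paper's algorithm):
# the co-activity term is multiplied by (target_total - total incoming
# weight), pulling each neuron's summed input weight towards a target.
rng = np.random.default_rng(0)
n_pre, n_post = 20, 5
W = rng.uniform(0.0, 0.05, size=(n_post, n_pre))
target_total = 1.0        # desired total incoming weight per neuron
eta = 0.05                # learning rate

for step in range(200):
    pre = (rng.random(n_pre) < 0.3).astype(float)   # random presynaptic spikes
    post = (W @ pre > 0.1).astype(float)            # simple threshold activation
    total_in = W.sum(axis=1, keepdims=True)
    # Hebbian co-activity scaled by the compensatory factor.
    W += eta * np.outer(post, pre) * (target_total - total_in)
    W = np.clip(W, 0.0, None)                       # keep weights non-negative

# Total incoming weight per neuron is pulled towards the target.
print(W.sum(axis=1))
```

The compensatory factor shrinks as a neuron's total weight approaches the target, so growth is self-limiting without a separate normalisation step.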
Since the cell assembly (CA) was hypothesised, it has gained substantial support and is believed to be the neural basis of psychological concepts. A CA is a relatively small set of connected neurons that, through neural firing, can sustain activation without stimulus from outside the CA, and is formed by learning. Extensive evidence from multiple single-unit recording and other techniques supports the existence of CAs with these properties, and shows that their neurons also spike with some degree of synchrony.
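The defining property, sustained activation without outside stimulus, can be shown with a toy network of binary threshold neurons; the connectivity and threshold values below are invented purely for illustration.

```python
import numpy as np

# Toy cell-assembly illustration: a densely connected subset of binary
# threshold neurons keeps firing after a brief external stimulus, while
# unconnected neurons fall silent.
n = 50
assembly = np.arange(10)                 # neurons 0-9 form the assembly
W = np.zeros((n, n))
for i in assembly:
    for j in assembly:
        if i != j:
            W[i, j] = 0.3                # strong intra-assembly synapses

active = np.zeros(n)
for t in range(30):
    stim = np.zeros(n)
    if t < 3:
        stim[assembly] = 1.0             # brief external ignition
    active = ((W @ active + stim) >= 1.0).astype(float)

# Assembly reverberates after the stimulus ends; the rest stays off.
print(active[:10].sum(), active[10:].sum())  # prints 10.0 0.0
```

Once ignited, each assembly neuron receives 9 × 0.3 = 2.7 units of recurrent input, well above the threshold of 1.0, so the activation is self-sustaining, which is the reverberation the CA hypothesis describes.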
A neurocomputational model, based on emergent massively overlapping neural cell assemblies (CAs), for resolving prepositional phrase (PP) attachment ambiguity is described. PP attachment ambiguity is a well-studied task in natural language processing and is a case where semantics is used to determine syntactic structure. A large network of biologically plausible fatiguing leaky integrate-and-fire neurons is trained with semantic hierarchies (obtained from WordNet) on sentences with PP attachment ambiguity extracted from the Penn Treebank corpus.
Cogn Neurodyn
December 2009
A natural language parser implemented entirely in simulated neurons is described. It produces a semantic representation based on frames. It parses solely using simulated fatiguing leaky integrate-and-fire neurons, a relatively accurate biological model that can be simulated efficiently.
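A fatiguing leaky integrate-and-fire neuron can be sketched in a few lines. This is a generic fLIF point model with illustrative parameters, not the paper's implementation: activation leaks each step, and each spike raises a fatigue term that is added to the threshold, so a constantly driven neuron fires intermittently rather than on every step.

```python
# Sketch of a fatiguing leaky integrate-and-fire (fLIF) neuron.
# All parameter values are illustrative, not taken from the paper.
class FLIFNeuron:
    def __init__(self, decay=0.5, threshold=4.0,
                 fatigue_up=1.0, fatigue_down=0.5):
        self.decay = decay                # leak factor per step
        self.threshold = threshold        # base firing threshold
        self.fatigue_up = fatigue_up      # fatigue gained per spike
        self.fatigue_down = fatigue_down  # fatigue recovered per silent step
        self.activation = 0.0
        self.fatigue = 0.0

    def step(self, input_current):
        self.activation = self.activation * self.decay + input_current
        if self.activation >= self.threshold + self.fatigue:
            self.activation = 0.0         # reset after spiking
            self.fatigue += self.fatigue_up
            return True
        self.fatigue = max(0.0, self.fatigue - self.fatigue_down)
        return False

# Constant drive produces intermittent, not continuous, firing.
neuron = FLIFNeuron()
spikes = [neuron.step(3.0) for _ in range(20)]
print(sum(spikes))  # fires on 7 of 20 steps
```

The rising threshold after each spike is what lets undriven or lightly driven neurons recover and fire later (the hypo-fatigue effect mentioned in the abstract above).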