We model the vague-to-crisp dynamics of percept formation in the brain by combining two methodologies: dynamic logic (DL) and operant learning. Forming percepts upon presentation of visual inputs is likened to model selection based on sampled evidence. Our framework uses DL to select the correct "percept" among competing ones, but relies on an intrinsic reward mechanism to allow stochastic online updates in lieu of the optimization step of the DL framework.
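As generic context (not the authors' implementation), the vague-to-crisp dynamics of DL can be illustrated by a deterministic-annealing-style update: competing models start with large fuzziness, and association weights sharpen as the fuzziness is gradually reduced. All data, constants, and the annealing schedule below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hidden "percepts" generate the evidence (synthetic, for illustration)
data = np.concatenate([rng.normal(-2.0, 0.3, 50), rng.normal(2.0, 0.3, 50)])

means = np.array([-0.5, 0.5])  # vague initial models, nearly indistinguishable
sigma = 4.0                    # large initial fuzziness

for _ in range(40):
    # Association weights: soft match between each datum and each model
    d = data[:, None] - means[None, :]
    w = np.exp(-d ** 2 / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)
    # Re-estimate each model from its weighted evidence
    means = (w * data[:, None]).sum(axis=0) / w.sum(axis=0)
    # Anneal the fuzziness: the vague-to-crisp transition
    sigma = max(0.3, sigma * 0.85)
```

As sigma shrinks, the initially near-uniform association weights become crisp assignments and the two model means separate toward the underlying percepts.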
Basic mechanisms of the mind, including cognition, language, and their semantic and emotional mechanisms, are modeled using dynamic logic (DL). This cognitively and mathematically motivated model leads to a dual-model hypothesis of language and cognition. The paper emphasizes that abstract cognition cannot evolve without language.
IEEE Trans Neural Netw Learn Syst, November 2012
This paper considers the unsupervised learning of high-dimensional binary feature vectors representing categorical information. A cognitively inspired framework, referred to as modeling fields theory (MFT), is utilized as the basic methodology. A new MFT-based algorithm, referred to as accelerated maximum a posteriori (MAP), is proposed.
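The abstract does not give the algorithm itself; as generic context only, unsupervised learning of binary feature vectors is often modeled with a Bernoulli mixture fit by MAP/EM-style updates. The sketch below is that standard approach, not the paper's accelerated MAP, and all data and parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic binary data from two prototype patterns (assumed, for illustration)
protos = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]], float)
X = (rng.random((200, 6)) < protos[rng.integers(0, 2, 200)] * 0.9 + 0.05).astype(float)

K, D = 2, X.shape[1]
p = rng.uniform(0.3, 0.7, (K, D))   # Bernoulli parameter per cluster and bit
pi = np.full(K, 1.0 / K)            # mixing proportions

for _ in range(50):
    # E-step: responsibilities under the Bernoulli likelihood
    log_lik = X @ np.log(p.T) + (1 - X) @ np.log(1 - p.T) + np.log(pi)
    r = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # M-step with a weak Beta(2,2) prior, i.e. MAP rather than ML estimates
    nk = r.sum(axis=0)
    p = (r.T @ X + 1.0) / (nk[:, None] + 2.0)
    pi = nk / nk.sum()
```

The Beta(2,2) prior keeps every Bernoulli parameter strictly inside (0, 1), so the log-likelihood never degenerates even when a cluster captures only identical bit patterns.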
Conscious and unconscious brain mechanisms, including cognition, emotions, and language, are considered in this review. The fundamental mechanisms of cognition include interactions between bottom-up and top-down signals. The modeling of these interactions since the 1960s is briefly reviewed, analyzing the ubiquitous difficulty: incomputable combinatorial complexity (CC).
The issue of how children learn the meanings of words is fundamental to developmental psychology. Recent attempts to develop or evolve efficient communication protocols among interacting robots or virtual agents have also made this issue central to more applied research fields, such as computational linguistics and neural networks. An attractive approach to learning an object-word mapping is so-called cross-situational learning.
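The idea behind cross-situational learning is that although any single situation is referentially ambiguous, a word and its true referent co-occur more often than the word and any distractor across many situations. A minimal counting sketch, with a hypothetical toy world and lexicon of our own invention:

```python
from collections import Counter
from itertools import product

# Hypothetical situations: an utterance (words) paired with visible objects;
# each scene includes a distractor, so no single scene resolves the mapping
situations = [
    (["ball"],        ["BALL", "DOG"]),
    (["dog"],         ["DOG", "CUP"]),
    (["ball", "cup"], ["CUP", "BALL"]),
    (["dog"],         ["DOG", "BALL"]),
    (["cup"],         ["CUP", "DOG"]),
]

# Cross-situational statistics: count word-object co-occurrences
counts = Counter()
for words, objects in situations:
    for w, o in product(words, objects):
        counts[(w, o)] += 1

# Learned lexicon: map each word to its most frequent co-occurring object
all_objects = sorted({o for _, objs in situations for o in objs})
learned = {w: max(all_objects, key=lambda o: counts[(w, o)])
           for w in ["ball", "dog", "cup"]}
```

Even though every individual scene admits two possible referents per word, the accumulated counts single out the correct object for each word.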
The cellular simultaneous recurrent neural network (SRN) has been shown to be a more powerful function approximator than the multilayer perceptron (MLP). That is, for some problems the complexity of an MLP would be prohibitively large, while an SRN can realize the desired mapping within acceptable computational constraints. The speed of training of complex recurrent networks is crucial to their successful application.