Publications by authors named "Mufeng Tang"

Article Synopsis
  • The brain must continually interpret changing sensory stimuli; predictive coding offers a framework for understanding how perception works, but how it extends to prediction over time remains an open question.
  • This research presents a temporal predictive coding model that can be implemented in recurrent networks, relying only on local inputs and simple learning rules while approximating the performance of a Kalman filter without its complex computations.
  • The model develops motion-sensitive receptive fields that match those found in real neurons, suggesting how neural circuits could predict future stimuli in dynamic environments (a minimal sketch of such dynamics follows below).
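
A minimal sketch of the kind of recurrent, error-driven dynamics described above, assuming a standard predictive-coding formulation with a linear transition matrix A and observation matrix C. All names, dimensions, and hyperparameters here are illustrative assumptions, not values taken from the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    n_obs, n_hidden = 4, 8                     # toy dimensions

    A = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # temporal (transition) weights
    C = rng.normal(scale=0.1, size=(n_obs, n_hidden))     # observation weights

    def tpc_step(y_t, x_prev, A, C, n_iters=20, lr_x=0.1, lr_w=0.01):
        """Infer the hidden state for one time step, then update weights locally."""
        x_t = A @ x_prev                                   # prediction from the previous state
        for _ in range(n_iters):
            eps_y = y_t - C @ x_t                          # sensory prediction error
            eps_x = x_t - A @ x_prev                       # temporal prediction error
            x_t += lr_x * (C.T @ eps_y - eps_x)            # relax the state against both errors
        # Hebbian-style updates: each term is a product of local pre/post quantities
        A += lr_w * np.outer(x_t - A @ x_prev, x_prev)
        C += lr_w * np.outer(y_t - C @ x_t, x_t)
        return x_t, A, C

    # Toy usage: filter a stream of noisy observations online
    x_est = np.zeros(n_hidden)
    for t in range(100):
        y_t = rng.normal(size=n_obs)                       # placeholder observations
        x_est, A, C = tpc_step(y_t, x_est, A, C)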
Article Synopsis
  • The hippocampus plays a critical role in associative memory tasks, with recent theories linking its predictive coding mechanisms to these memory processes.
  • A new computational model based on hierarchical predictive coding networks was developed to better reflect the recurrent connections found in the CA3 region of the hippocampus, which are important for associative memory.
  • The proposed models learn covariance information implicitly and are numerically stable, offering a more biologically accurate framework for understanding hippocampal memory formation and its interactions with the neocortex (see the sketch below).
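
A minimal sketch of a recurrent, covariance-style associative memory consistent with the ideas above: weights are learned from mean-subtracted pattern statistics, and recall iteratively reduces a recurrent prediction error while the known entries of a cue stay clamped. The nonlinearity, learning rule, and recall schedule are illustrative assumptions, not the paper's exact formulation:

    import numpy as np

    def store(patterns):
        """Hebbian, covariance-style learning of recurrent weights (covariance kept implicit)."""
        X = patterns - patterns.mean(axis=0)
        return X.T @ X / len(patterns)

    def recall(W, cue, known, n_iters=50, lr=0.2):
        """Complete a partial cue by reducing the recurrent prediction error."""
        x = cue.copy()
        for _ in range(n_iters):
            eps = x - np.tanh(W @ x)          # prediction error under a saturating nonlinearity
            x -= lr * eps                     # relax toward the recurrent prediction
            x[known] = cue[known]             # keep the observed entries clamped
        return x

    rng = np.random.default_rng(1)
    patterns = rng.choice([-1.0, 1.0], size=(5, 16))   # 5 random binary memories
    W = store(patterns)

    cue = patterns[0].copy()
    cue[8:] = 0.0                                      # corrupt half of the first memory
    completed = recall(W, cue, known=np.arange(8))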

Forming accurate memories of sequential stimuli is a fundamental function of biological agents. However, the computational mechanism underlying sequential memory in the brain remains unclear. Inspired by neuroscience theories and recent successes in applying predictive coding (PC) to memory tasks, in this work we propose a novel PC-based model for sequential memory, called temporal predictive coding (tPC).
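
A minimal sketch of the temporal-prediction idea applied to sequence memory, assuming a single linear one-step predictor trained with a local, outer-product update and replayed from the first item; the architecture, nonlinearity, and learning rate here are illustrative and may differ from the paper's model:

    import numpy as np

    rng = np.random.default_rng(2)
    seq = rng.choice([-1.0, 1.0], size=(10, 32))       # a toy sequence of binary patterns

    P = np.zeros((32, 32))                             # one-step prediction weights
    lr = 0.01
    for _ in range(300):                               # repeated presentations of the sequence
        for t in range(len(seq) - 1):
            eps = seq[t + 1] - P @ seq[t]              # temporal prediction error
            P += lr * np.outer(eps, seq[t])            # local, Hebbian-style correction

    # Recall: seed with the first item and roll the predictor forward
    recalled = [seq[0]]
    for _ in range(len(seq) - 1):
        recalled.append(np.sign(P @ recalled[-1]))     # binarize for this toy demo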


We develop biologically plausible training mechanisms for self-supervised learning (SSL) in deep networks. Specifically, by biologically plausible training we mean (i) all weight updates are based on the activities of pre-synaptic units and the current activity, or activity retrieved from short-term memory, of post-synaptic units, including at the top-most error-computing layer, (ii) complex computations such as normalization, inner products, and division are avoided, (iii) connections between units may be asymmetric, and (iv) most learning is carried out in an unsupervised manner. SSL with a contrastive loss satisfies the fourth condition, as it does not require labeled data, and it introduces robustness to observed perturbations of objects, which occur naturally as objects or observers move in 3D and with variable lighting over time.
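
As a rough illustration of these constraints (not the paper's algorithm), the sketch below trains a single linear layer with a contrastive-style, layer-local rule: outer-product updates pull the representations of two views of the same input together and reduce the overlap with the representation of an unrelated input, using only pre- and post-synaptic activities. All sizes, names, and coefficients are assumptions for the toy example:

    import numpy as np

    rng = np.random.default_rng(3)
    W = rng.normal(scale=0.1, size=(16, 64))           # a single linear layer (toy sizes)

    def local_contrastive_update(W, x_a, x_b, x_neg, lr=0.01, push=0.1):
        """x_a, x_b: two views of one input; x_neg: a different input."""
        h_a, h_b, h_neg = W @ x_a, W @ x_b, W @ x_neg
        # Pull the two views together (descend ||h_a - h_b||^2)...
        W = W + lr * (np.outer(h_b - h_a, x_a) + np.outer(h_a - h_b, x_b))
        # ...and reduce the overlap between h_a and the unrelated input's representation
        W = W - lr * push * (np.outer(h_neg, x_a) + np.outer(h_a, x_neg))
        return W

    for _ in range(500):
        x = rng.normal(size=64)
        x_a = x + 0.1 * rng.normal(size=64)            # two noisy "augmented" views
        x_b = x + 0.1 * rng.normal(size=64)
        x_neg = rng.normal(size=64)                    # an unrelated input
        W = local_contrastive_update(W, x_a, x_b, x_neg)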
