Publications by authors named "Mate Lengyel"

High tissue density of the mammary gland is considered a pro-tumorigenic factor; hence, suppressing the stimuli that induce matrix buildup carries the potential for cancer interception. We found that, in non-malignant mammary epithelial cells, the combination of the chemopreventive agents bexarotene (Bex) and carvedilol (Carv) suppresses zymogen granule protein 16B (ZG16B, PAUF) through an interaction of ARID1A with a proximal enhancer. Bex + Carv also reduced ZG16B levels in vivo in normal breast tissue and MDA-MB231 tumor xenografts.

Working memory (WM) is a fundamental aspect of cognition. WM maintenance is classically thought to rely on stable patterns of neural activity. However, recent evidence shows that neural population activity during WM maintenance undergoes dynamic variations before settling into a stable pattern.

Sub-additivity and variability are ubiquitous response motifs in the primary visual cortex (V1). Response sub-additivity enables the construction of useful interpretations of the visual environment, whereas response variability indicates the factors that limit the precision with which the brain can do this. There is increasing evidence that experimental manipulations that elicit response sub-additivity often also quench response variability.

Working memory involves the short-term maintenance of information and is critical in many tasks. The neural circuit dynamics underlying working memory remain poorly understood, with different aspects of prefrontal cortical (PFC) responses explained by different putative mechanisms. Using mathematical analysis, numerical simulations, and recordings from monkey PFC, we investigate a critical but hitherto ignored aspect of working memory dynamics: information loading.

The input-output transformation of individual neurons is a key building block of neural circuit dynamics. While previous models of this transformation vary widely in their complexity, they all describe the underlying functional architecture as unitary, such that each synaptic input makes a single contribution to the neuronal response. Here, we show that the input-output transformation of CA1 pyramidal cells is instead best captured by two distinct functional architectures operating in parallel.

Flexible behavior requires the creation, updating, and expression of memories to depend on context. While the neural underpinnings of each of these processes have been intensively studied, recent advances in computational modeling revealed a key challenge in context-dependent learning that had been largely ignored previously: Under naturalistic conditions, context is typically uncertain, necessitating contextual inference. We review a theoretical approach to formalizing context-dependent learning in the face of contextual uncertainty and the core computations it requires.

Variations in the geometry of the environment, such as the shape and size of an enclosure, have profound effects on navigational behavior and its neural underpinning. Here, we show that these effects arise as a consequence of a single, unifying principle: to navigate efficiently, the brain must maintain and update the uncertainty about one's location. We developed an image-computable Bayesian ideal observer model of navigation, continually combining noisy visual and self-motion inputs, and a neural encoding model optimized to represent the location uncertainty computed by the ideal observer.
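
The core computation of such an ideal observer, continually maintaining and updating location uncertainty from noisy self-motion and visual inputs, can be illustrated with a minimal 1-D Kalman filter sketch. This is not the paper's image-computable model; all parameters (noise variances, motion statistics) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal 1-D Kalman filter: a location estimate that fuses noisy
# self-motion (prediction) with noisy visual cues (measurement).
# All parameters are illustrative, not taken from the paper.
q, r = 0.1, 0.5           # self-motion and visual noise variances
mu, var = 0.0, 1.0        # posterior mean and location uncertainty
true_pos = 0.0
for t in range(100):
    v = rng.normal(0.0, 0.2)                    # self-motion input
    true_pos += v
    mu, var = mu + v, var + q                   # predict: uncertainty grows
    z = true_pos + rng.normal(0.0, np.sqrt(r))  # noisy visual observation
    k = var / (var + r)                         # Kalman gain
    mu += k * (z - mu)                          # update toward the visual cue
    var *= 1 - k                                # uncertainty shrinks
print(round(var, 3))  # → 0.179 (steady-state location uncertainty)
```

Note that the uncertainty `var` evolves deterministically and settles at a steady state set by the two noise variances, which is the quantity a downstream neural code would need to represent.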

Context is widely regarded as a major determinant of learning and memory across numerous domains, including classical and instrumental conditioning, episodic memory, economic decision-making, and motor learning. However, studies across these domains remain disconnected due to the lack of a unifying framework formalizing the concept of context and its role in learning. Here, we develop a unified vernacular allowing direct comparisons between different domains of contextual learning.

Therapeutic targets in cancer cells defective for the tumor suppressor ARID1A are fundamental to synthetic lethal strategies. However, whether modulating ARID1A function in premalignant breast epithelial cells could be exploited to reduce carcinogenic potential remains to be elucidated. In search of chromatin-modulating mechanisms activated by anti-proliferative agents in normal breast epithelial (HME-hTert) cells, we identified a distinct pattern of genome-wide H3K27 histone acetylation marks characteristic of the combined treatment with the cancer-preventive rexinoid bexarotene (Bex) and carvedilol (Carv).

Sequential activity reflecting previously experienced temporal sequences is considered a hallmark of learning across cortical areas. However, it is unknown how cortical circuits avoid the converse problem: producing spurious sequences that do not reflect sequences in their inputs. We develop methods to quantify and study sequentiality in neural responses.

Recent breakthroughs in artificial intelligence (AI) have enabled machines to plan in tasks previously thought to be uniquely human. Meanwhile, the planning algorithms implemented by the brain itself remain largely unknown. Here, we review neural and behavioral data in sequential decision-making tasks that elucidate the ways in which the brain does, and does not, plan.

Humans spend a lifetime learning, storing and refining a repertoire of motor memories. For example, through experience, we become proficient at manipulating a large range of objects with distinct dynamical properties. However, it is unknown what principle underlies how our continuous stream of sensorimotor experience is segmented into separate memories and how we adapt and use this growing repertoire.

Perception is often described as probabilistic inference requiring an internal representation of uncertainty. However, it is unknown whether uncertainty is represented in a task-dependent manner, solely at the level of decisions, or in a fully Bayesian manner, across the entire perceptual pathway. To address this question, we first codify and evaluate the possible strategies the brain might use to represent uncertainty, and highlight the normative advantages of fully Bayesian representations.

Sensory cortices display a suite of ubiquitous dynamical features, such as ongoing noise variability, transient overshoots, and oscillations, which have so far escaped a common, principled theoretical account. We developed a unifying model for these phenomena by training a recurrent excitatory-inhibitory neural circuit model of a visual cortical hypercolumn to perform sampling-based probabilistic inference. The optimized network displayed several key biological properties, including divisive normalization, stimulus-modulated noise variability, inhibition-dominated transients at stimulus onset, and strong gamma oscillations.
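
The idea of sampling-based inference, in which ongoing noise variability is interpreted as stochastic dynamics drawing samples from a posterior, can be caricatured with Langevin dynamics. This is not the paper's trained excitatory-inhibitory network; the posterior, parameters, and dynamics below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Langevin sampling: noisy dynamics whose stationary distribution is
# a Gaussian posterior N(mu, Sigma) over two stimulus features.
mu = np.array([1.0, -1.0])
Sigma = np.array([[1.0, 0.6],
                  [0.6, 1.0]])
P = np.linalg.inv(Sigma)                 # precision matrix

dt, steps = 0.01, 200000
x = np.zeros(2)
samples = np.empty((steps, 2))
for t in range(steps):
    drift = -P @ (x - mu)                # pull toward the posterior mean
    x += dt * drift + np.sqrt(2 * dt) * rng.normal(size=2)
    samples[t] = x

print(samples[10000:].mean(axis=0).round(1))  # close to mu
```

After a burn-in period, the empirical mean and covariance of the trajectory approximate the posterior's mean and covariance, so moment-to-moment "noise" in the trajectory carries the uncertainty information.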

An important computational goal of the visual system is 'representational untangling' (RU): representing increasingly complex features of visual scenes in an easily decodable format. RU is typically assumed to be achieved in high-level visual cortices via several stages of cortical processing. Here we show, using a canonical population coding model, that RU of low-level orientation information is already performed at the first cortical stage of visual processing, but not before that, by a fundamental cellular-level property: the thresholded firing rate nonlinearity of simple cells in the primary visual cortex (V1).
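
The effect of a thresholding nonlinearity on linear decodability can be illustrated with a toy population code (tuning curves, threshold, and category task all invented for illustration, not the paper's model). Pre-threshold responses below are linear in two Fourier features of orientation, so a linear readout cannot report a higher-harmonic category of orientation; after thresholding, the same population supports an accurate linear readout.

```python
import numpy as np

rng = np.random.default_rng(1)

thetas = rng.uniform(0, np.pi, 2000)               # stimulus orientations
prefs = np.linspace(0, np.pi, 12, endpoint=False)  # preferred orientations
# Pre-threshold "membrane potentials": linear in cos/sin of 2*theta
u = np.cos(2 * (thetas[:, None] - prefs[None, :]))
r = np.maximum(u - 0.5, 0.0)                       # thresholded firing rates
y = np.sin(4 * thetas) > 0                         # higher-harmonic category

def linear_readout_accuracy(x, y):
    # least-squares linear readout with a bias term, thresholded at 0.5
    X = np.column_stack([x, np.ones(len(x))])
    w, *_ = np.linalg.lstsq(X, y.astype(float), rcond=None)
    return float(np.mean((X @ w > 0.5) == y))

acc_u = linear_readout_accuracy(u, y)   # near chance
acc_r = linear_readout_accuracy(r, y)   # far above chance
print(acc_u, acc_r)
```

The rectification introduces higher harmonics of orientation into the population response, which is what makes the category linearly separable downstream.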

The concept of objects is fundamental to cognition and is defined by a consistent set of sensory properties and physical affordances. Although it is unknown how the abstract concept of an object emerges, most accounts assume that visual or haptic boundaries are crucial in this process. Here, we tested an alternative hypothesis that boundaries are not essential but simply reflect a more fundamental principle: consistent visual or haptic statistical properties.

Dendrites integrate inputs nonlinearly, but it is unclear how these nonlinearities contribute to the overall input-output transformation of single neurons. We developed statistically principled methods using a hierarchical cascade of linear-nonlinear subunits (hLN) to model the dynamically evolving somatic response of neurons receiving complex, in vivo-like spatiotemporal synaptic input patterns. We used the hLN to predict the somatic membrane potential of an in vivo-validated detailed biophysical model of a L2/3 pyramidal cell.
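
A skeletal version of such a cascade is sketched below: two "dendritic" subunits, each linearly filtering its own synaptic inputs and applying a sigmoidal nonlinearity, with the soma summing the subunit outputs. Filter shapes, weights, and input statistics are invented for illustration; this is not the paper's fitted hLN model.

```python
import numpy as np

rng = np.random.default_rng(2)

T, n_syn = 1000, 10
# Poisson spike trains for two subunits, n_syn synapses each
inputs = rng.poisson(0.05, size=(2, n_syn, T)).astype(float)

t = np.arange(50)
kernel = np.exp(-t / 10.0) - np.exp(-t / 2.0)  # alpha-like synaptic filter

def subunit(x, w):
    drive = w @ x                                # weighted sum over synapses
    filt = np.convolve(drive, kernel)[:T]        # linear temporal filtering
    return 1.0 / (1.0 + np.exp(-(filt - 1.0)))  # sigmoidal subunit output

w = rng.uniform(0.1, 0.3, size=(2, n_syn))
v_soma = sum(subunit(inputs[i], w[i]) for i in range(2))  # somatic sum
print(v_soma.shape)  # → (1000,)
```

Fitting such a model to data amounts to learning the synaptic weights and filters of each subunit so that `v_soma` predicts the recorded somatic membrane potential.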

A key component of interacting with the world is how to direct one's sensors so as to extract task-relevant information, a process referred to as active sensing. In this review, we present a framework for active sensing that forms a closed loop between an ideal observer, which extracts task-relevant information from a sequence of observations, and an ideal planner, which specifies the actions that lead to the most informative observations. We discuss active sensing as an approximation to exploration in the wider framework of reinforcement learning and, conversely, discuss several sensory, perceptual, and motor processes as approximations to active sensing.
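
The observer-planner loop can be sketched with a toy search task (the task, sensor likelihoods, and greedy information-gain planner are invented for illustration): the observer maintains a posterior over which of three locations holds a target, and the planner looks wherever the expected posterior entropy after the observation is lowest.

```python
import numpy as np

rng = np.random.default_rng(5)

p_hit, p_false = 0.8, 0.2      # assumed sensor likelihoods
true_loc = 2                   # hidden target location
post = np.ones(3) / 3          # observer's posterior over locations

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def likelihood(a, obs):
    # probability of observation obs at look-location a, per hypothesis
    p = p_hit if obs else 1 - p_hit
    q = p_false if obs else 1 - p_false
    return np.where(np.arange(3) == a, p, q)

def expected_entropy(post, a):
    # ideal planner: average posterior entropy after looking at a
    h = 0.0
    for obs in (True, False):
        joint = likelihood(a, obs) * post
        p_obs = joint.sum()
        h += p_obs * entropy(joint / p_obs)
    return h

for step in range(10):
    a = int(np.argmin([expected_entropy(post, a) for a in range(3)]))
    obs = rng.random() < (p_hit if a == true_loc else p_false)
    post = likelihood(a, obs) * post   # ideal observer: Bayesian update
    post /= post.sum()

print(post.round(2))
```

Minimizing expected posterior entropy is equivalent to maximizing expected information gain here, since the pre-observation entropy is fixed when the action is chosen.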

Correlated variability in cortical activity is ubiquitously quenched following stimulus onset, in a stimulus-dependent manner. These modulations have been attributed to circuit dynamics involving either multiple stable states ("attractors") or chaotic activity. Here we show that a qualitatively different dynamical regime, involving fluctuations about a single, stimulus-driven attractor in a loosely balanced excitatory-inhibitory network (the stochastic "stabilized supralinear network"), best explains these modulations.

Two theoretical ideas have emerged recently with the ambition to provide a unifying functional explanation of neural population coding and dynamics: predictive coding and Bayesian inference. Here, we describe the two theories and their combination into a single framework: Bayesian predictive coding. We clarify how the two theories can be distinguished, despite sharing core computational concepts and addressing an overlapping set of empirical phenomena.
