Publications by authors named "Bradley C Love"

Scientific discoveries often hinge on synthesizing decades of research, a task that potentially outstrips human information processing capacities. Large language models (LLMs) offer a potential solution: trained on the vast scientific literature, they could integrate noisy yet interrelated findings to forecast novel results better than human experts.

Article Synopsis
  • Humans and machines often learn without direct feedback or supervision, relying heavily on unsupervised data.
  • There is debate around whether unsupervised learning is beneficial for humans, with mixed empirical results suggesting that self-reinforcement of predictions can be advantageous or detrimental based on the alignment of those predictions with the task.
  • The authors propose a framework to explain these mixed results and offer insights into effective learning strategies relevant to education and lifelong learning.
Article Synopsis
  • The Brain Imaging Data Structure (BIDS) is a community-created standard for organizing neuroscience data and metadata, helping researchers manage various modalities efficiently (a minimal example layout is sketched after this synopsis).
  • The paper discusses the evolution of BIDS, including the guiding principles, extension mechanisms, and challenges faced during its development.
  • It also highlights key lessons learned from the BIDS project, aiming to inspire and inform researchers in other fields about effective data organization practices.
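To make these organizing conventions concrete, below is a minimal sketch of what a BIDS-style dataset layout looks like. The subject and task names here are illustrative only; the specification at bids.neuroimaging.io defines the authoritative rules.

```
dataset_description.json          # required dataset-level metadata
participants.tsv                  # one row of participant information per subject
sub-01/
  anat/
    sub-01_T1w.nii.gz             # anatomical scan, named by subject and suffix
  func/
    sub-01_task-rest_bold.nii.gz  # functional run for the "rest" task
    sub-01_task-rest_bold.json    # sidecar metadata for that run
```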

Cognitive scientists often infer multidimensional representations from data. Whether the data involve text, neuroimaging, neural networks, or human judgments, researchers frequently infer and analyze latent representational spaces (i.e.


Categorization requires a balance of mechanisms that can generalize across common features and discriminate against specific details. A growing literature suggests that the hippocampus may achieve this balance through fundamental mechanisms like pattern separation, pattern completion, and memory integration. Here, we assessed the role of the rodent dorsal hippocampus (HPC) in category learning by combining inhibitory DREADDs (Designer Receptors Exclusively Activated by Designer Drugs) and simulations using a neural network model.


Learning systems must constantly decide whether to create new representations or update existing ones. For example, a child learning that a bat is a mammal and not a bird would be best served by creating a new representation, whereas updating may be best when encountering a second similar bat. Characterizing the neural dynamics that underlie these complementary memory operations requires identifying the exact moments when each operation occurs.


An incomplete science begets imperfect models. Nevertheless, the target article advocates for jettisoning deep-learning models with some competency in object recognition in favor of toy models evaluated against a checklist of laboratory findings, an approach that evokes Allen Newell's 20 questions critique. We believe their approach risks incoherency and neglects the most basic test: can the model perform its intended task?


Despite their impressive performance in object recognition and other tasks under standard testing conditions, deep networks often fail to generalize to out-of-distribution (o.o.d.


Whether supervised or unsupervised, human and machine learning is usually characterized as event-based. However, learning may also proceed by systems alignment in which mappings are inferred between entire systems, such as visual and linguistic systems. Systems alignment is possible because items that share similar visual contexts, such as a car and a truck, will also tend to share similar linguistic contexts.


Background: Machine learning (ML) approaches are a crucial component of modern data analysis in many fields, including epidemiology and medicine. Nonlinear ML methods often achieve accurate predictions, for instance, in personalized medicine, as they are capable of modeling complex relationships between features and the target. Problematically, ML models and their predictions can be biased by confounding information present in the features.
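As a rough illustration of the issue, the sketch below builds toy features that partly encode a confound, then compares a classifier fit on the raw features with one fit after the confound has been regressed out of each feature. This is a generic confound-regression baseline, not necessarily the procedure analyzed in the article, and the data and variable names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
confound = rng.normal(size=(n, 1))               # e.g., age or scanner site (hypothetical)
X = rng.normal(size=(n, 5)) + 0.8 * confound     # features partly driven by the confound
y = (confound[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # target tied only to the confound

# Regress the confound out of each feature and keep the residuals
X_clean = X - LinearRegression().fit(confound, X).predict(confound)

for name, feats in [("raw features", X), ("confound-removed features", X_clean)]:
    acc = cross_val_score(LogisticRegression(), feats, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy = {acc:.2f}")
```

In this toy setup, the raw-feature classifier can exploit the confound information that leaked into the features, whereas accuracy after residualization should fall back toward chance.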

Article Synopsis
  • The Brain Imaging Data Structure (BIDS) is a collaborative standard designed to organize various neuroscience data and metadata.
  • The paper details the history, principles, and mechanisms behind the development and expansion of BIDS, alongside the challenges it faces as it evolves.
  • It also shares lessons learned from the project to help researchers in other fields apply similar successful strategies.

Similarity and categorization are fundamental processes in human cognition that help complex organisms make sense of the cacophony of information in their environment. These processes are critical for tasks such as recognizing objects, making decisions, and forming memories. In this review, we provide an overview of the current state of knowledge on similarity and psychological spaces, discussing the theories, methods, and empirical findings that have been generated over the years.


A complete neuroscience requires multilevel theories that address phenomena ranging from higher-level cognitive behaviors to activities within a cell. We propose an extension to the level of mechanism approach where a computational model of cognition sits in between behavior and brain: It explains the higher-level behavior and can be decomposed into lower-level component mechanisms to provide a richer understanding of the system than any level alone. Toward this end, we decomposed a cognitive model into neuron-like units using a neural flocking approach that parallels recurrent hippocampal activity.


Categorization is an adaptive cognitive function that allows us to generalize knowledge to novel situations. Converging evidence from neuropsychological, neuroimaging, and neurophysiological studies suggests that categorization is mediated by the basal ganglia; however, there is debate regarding the necessity of each subregion of the basal ganglia and its respective function. The current experiment examined the roles of the dorsomedial striatum (DMS; homologous to the head of the caudate nucleus) and dorsolateral striatum (DLS; homologous to the body and tail of the caudate nucleus) in category learning by combining selective lesions with computational modeling.


Functional correspondences between deep convolutional neural networks (DCNNs) and the mammalian visual system support a hierarchical account in which successive stages of processing contain ever higher-level information. However, these correspondences between brain and model activity involve shared, not task-relevant, variance. We propose a stricter account of correspondence: If a DCNN layer corresponds to a brain region, then replacing model activity with brain activity should successfully drive the DCNN's object recognition decision.
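The flavor of this stricter test can be sketched as follows: take a network, replace an intermediate layer's activity with a mapping of recorded brain activity, and check whether the remaining layers still produce the right decision. The snippet below is only a schematic under assumed shapes; the voxel data, the linear mapping, and the choice of layer are hypothetical placeholders rather than the authors' pipeline.

```python
import torch
import torch.nn as nn
from torchvision.models import alexnet

model = alexnet(weights=None).eval()    # in practice, a network pretrained on object recognition

# Hypothetical brain data for one stimulus: 5000 voxels of placeholder values
voxels = torch.randn(1, 5000)

# Hypothetical linear mapping from voxel space to the flattened convolutional features
# (such a mapping would normally be estimated, e.g., by regression on held-out stimuli)
to_features = nn.Linear(5000, 256 * 6 * 6)

with torch.no_grad():
    substituted = to_features(voxels)        # stands in for the output of model.features
    logits = model.classifier(substituted)   # later stages are driven by brain-derived activity
    print(logits.argmax(dim=1))              # the object decision under the substitution
```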


Recent findings suggest conceptual relationships hold across modalities. For instance, if two concepts occur in similar linguistic contexts, they also likely occur in similar visual contexts. These similarity structures may provide a valuable signal for alignment when learning to map between domains, such as when learning the names of objects.
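A toy version of this alignment signal is a second-order comparison: compute each modality's pairwise similarity matrix and score candidate mappings by how well the two structures agree. The sketch below is purely illustrative (random embeddings and a brute-force search over mappings) and is not the model developed in the article.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)

# Toy embeddings for four concepts; the "linguistic" space is a rotated, noisy copy of the visual one
visual = rng.normal(size=(4, 8))
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))                 # random rotation of the space
linguistic = visual @ Q + 0.05 * rng.normal(size=(4, 8))

def sim_matrix(E):
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    return E @ E.T                                           # cosine similarity between items

def alignment_score(mapping):
    """Agreement of the two similarity structures under a candidate object-to-word mapping."""
    v = sim_matrix(visual)
    ling = sim_matrix(linguistic)[np.ix_(mapping, mapping)]
    iu = np.triu_indices(4, k=1)
    return np.corrcoef(v[iu], ling[iu])[0, 1]

best = max(permutations(range(4)), key=alignment_score)
print(best)   # with shared relational structure, the correct (identity) mapping should win
```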


Replay can consolidate memories through offline neural reactivation related to past experiences. Category knowledge is learned across multiple experiences, and its subsequent generalization is promoted by consolidation and replay during rest and sleep. However, aspects of replay are difficult to determine from neuroimaging studies.


Whether adding songs to a playlist or groceries during an online shop, how do we decide what to choose next? We develop a model that predicts such open-ended, sequential choices using a process of cued retrieval from long-term memory. Using the past choice to cue subsequent retrievals, this model predicts the sequential purchases and response times of nearly 5 million grocery purchases made by more than 100,000 online shoppers. Products can be associated in different ways, such as by their episodic association or semantic overlap, and we find that consumers query multiple forms of associative knowledge when retrieving options.
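The retrieval step at the heart of such a model can be caricatured as cueing memory with the most recent choice and sampling the next item in proportion to associative strength. The sketch below uses made-up items and association values and omits the response-time machinery; it illustrates the general idea, not the authors' model or data.

```python
import numpy as np

rng = np.random.default_rng(0)
items = ["milk", "cereal", "bread", "butter", "apples"]

# Hypothetical associative strengths (e.g., from co-purchase or semantic overlap)
assoc = np.array([
    [0.0, 0.9, 0.2, 0.1, 0.1],   # milk
    [0.9, 0.0, 0.1, 0.1, 0.2],   # cereal
    [0.2, 0.1, 0.0, 0.8, 0.1],   # bread
    [0.1, 0.1, 0.8, 0.0, 0.1],   # butter
    [0.1, 0.2, 0.1, 0.1, 0.0],   # apples
])

def next_choice(prev, beta=3.0):
    """Cue memory with the previous choice; sample the next item by softmax over associations."""
    p = np.exp(beta * assoc[prev])
    p[prev] = 0.0                          # the item just chosen is not added again
    return rng.choice(len(items), p=p / p.sum())

basket, current = ["milk"], 0
for _ in range(3):
    current = next_choice(current)
    basket.append(items[current])
print(basket)
```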


Humans continuously categorise inputs, but only rarely receive explicit feedback as to whether or not they are correct. This implies that they may be integrating unsupervised information with their sparse supervised data, a form of semi-supervised learning. However, experiments testing semi-supervised learning are rare and are bedevilled with conflicting results about whether the unsupervised information affords any benefit.
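For readers unfamiliar with the setting, the snippet below shows an off-the-shelf semi-supervised baseline in scikit-learn: most labels are hidden (marked -1) and a self-training wrapper folds the classifier's own confident predictions on unlabelled items back into training. It illustrates the paradigm only; the experiments and conclusions of the article are a separate matter.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

# Keep labels for roughly 10% of items; mark the rest as unlabelled with -1
rng = np.random.default_rng(0)
y_semi = y.copy()
y_semi[rng.random(len(y)) > 0.1] = -1

supervised_only = LogisticRegression().fit(X[y_semi != -1], y[y_semi != -1])
semi_supervised = SelfTrainingClassifier(LogisticRegression()).fit(X, y_semi)

print("supervised-only accuracy:", supervised_only.score(X, y))
print("semi-supervised accuracy:", semi_supervised.score(X, y))
```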


Induction benefits from useful priors. Penalized regression approaches, like ridge regression, shrink weights toward zero, but zero association is usually not a sensible prior. Inspired by simple and robust decision heuristics humans use, we constructed non-zero priors for penalized regression models that provide robust and interpretable solutions across several tasks.
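One way to see how a non-zero prior enters penalized regression: shrinking weights toward a prior vector w0 (instead of toward zero) is equivalent to fitting ordinary ridge regression on the residual target y - X·w0 and adding w0 back to the fitted coefficients. The sketch below illustrates that identity with an equal-weights prior as a stand-in for a simple heuristic; it is a minimal demonstration, not the authors' models or tasks.

```python
import numpy as np
from sklearn.linear_model import Ridge

def ridge_with_prior(X, y, w0, alpha=1.0):
    """Minimize ||y - Xw||^2 + alpha * ||w - w0||^2.

    Substituting v = w - w0 reduces this to standard ridge on the
    residual target (y - X @ w0), after which w0 is added back.
    """
    residual = y - X @ w0
    fit = Ridge(alpha=alpha, fit_intercept=False).fit(X, residual)
    return w0 + fit.coef_

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
true_w = np.array([0.8, 1.2, 1.0, 0.9, 1.1])
y = X @ true_w + rng.normal(scale=0.5, size=50)

w0 = np.ones(5)    # hypothetical equal-weights prior: shrink toward 1, not toward 0
print("shrunk toward zero:  ", Ridge(alpha=10.0, fit_intercept=False).fit(X, y).coef_)
print("shrunk toward prior: ", ridge_with_prior(X, y, w0, alpha=10.0))
```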


People deploy top-down, goal-directed attention to accomplish tasks, such as finding lost keys. By tuning the visual system to relevant information sources, object recognition can become more efficient (a benefit) and more biased toward the target (a potential cost). Motivated by selective attention in categorisation models, we developed a goal-directed attention mechanism that can process naturalistic (photographic) stimuli.


Category learning groups stimuli according to similarity or function. This involves finding and attending to stimulus features that reliably inform category membership. Although many of the neural mechanisms underlying categorization remain elusive, models of human category learning posit that prefrontal cortex plays a substantial role.


Contemporary models of categorization typically sidestep the problem of how information is initially encoded during decision making. Instead, a focus of this work has been to investigate how, through selective attention, stimulus representations are "contorted" such that behaviorally relevant dimensions are accentuated (or "stretched") and the representations of irrelevant dimensions are ignored (or "compressed"). In high-dimensional real-world environments, it is computationally infeasible to sample all available information, and human decision makers selectively sample information from sources expected to provide relevant information.
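The stretching and compressing of dimensions is commonly formalized with an attention-weighted distance, as in exemplar models of categorization; the sketch below shows that standard formulation with illustrative values, not the specific model developed in this article.

```python
import numpy as np

def attention_weighted_distance(x, y, w, r=1.0):
    """Distance with attention weights w (non-negative, normalized to sum to 1).

    A large w_i stretches dimension i, so differences on it matter more;
    w_i near zero compresses the dimension, so differences are ignored.
    """
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    return np.sum(w * np.abs(np.asarray(x) - np.asarray(y)) ** r) ** (1.0 / r)

def similarity(x, y, w, c=1.0):
    """Exponential-decay similarity over the attention-weighted distance."""
    return np.exp(-c * attention_weighted_distance(x, y, w))

a, b = [0.2, 0.9], [0.8, 0.9]
print(similarity(a, b, w=[1.0, 0.0]))   # attend only to dimension 0: items look dissimilar
print(similarity(a, b, w=[0.0, 1.0]))   # attend only to dimension 1: items look identical
```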


Hebart et al. recently analysed 1.5 million human similarity judgments and found that natural objects are described by a small set of interpretable dimensions.


For decades, researchers have debated whether mental representations are symbolic or grounded in sensory inputs and motor programs. Certainly, aspects of mental representations are grounded. However, does the brain also contain abstract concept representations that mediate between perception and action in a flexible manner not tied to the details of sensory inputs and motor programs? Such conceptual pointers would be useful when concepts remain constant despite changes in appearance and associated actions.
