Understanding how visual information is encoded in biological and artificial systems often requires generating appropriate stimuli to test specific hypotheses, yet few methods exist for controlled video generation. Here we introduce the spatiotemporal style transfer (STST) algorithm, a dynamic visual stimulus generation framework for manipulating and synthesizing video stimuli for vision research. We show how stimuli can be generated that match the low-level spatiotemporal features of their natural counterparts but lack their high-level semantic features, providing a useful tool for studying object recognition. Using these stimuli to probe PredNet, a predictive coding deep network, we found that its next-frame predictions were not disrupted by the omission of high-level information; human observers likewise confirmed that the generated stimuli preserve low-level features while lacking high-level information. We also introduce a procedure for the independent spatiotemporal factorization of dynamic stimuli. Testing such factorized stimuli suggests a spatial bias in how both humans and deep vision models encode dynamic visual information. These results showcase potential applications of the STST algorithm as a versatile tool for dynamic stimulus generation in vision science.
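To make the core operation concrete, the sketch below optimizes a noise video so that second-order statistics (Gram matrices) of early spatiotemporal features match those of a natural target video, which is the essence of style-transfer-based stimulus synthesis. This is a minimal PyTorch illustration only, not the published STST implementation: the random fixed 3D-CNN feature extractor, the layer choices, the video shapes, and the loss normalization are all stand-in assumptions.

    # Minimal sketch of style-transfer-based video synthesis (assumption-laden;
    # the paper's actual feature spaces, losses, and optimizer differ).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Stand-in feature extractor: two early 3D-conv layers. A random fixed
    # network is used here purely for illustration.
    features = nn.Sequential(
        nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    ).eval()
    for p in features.parameters():
        p.requires_grad_(False)

    def gram(x):
        # x: (batch, channels, time, height, width) -> channel covariance,
        # a texture-like summary that discards high-level spatial arrangement.
        b, c, t, h, w = x.shape
        f = x.reshape(b, c, -1)
        return f @ f.transpose(1, 2) / (c * t * h * w)

    target = torch.rand(1, 3, 8, 64, 64)  # placeholder for a natural video clip
    synth = torch.rand(1, 3, 8, 64, 64, requires_grad=True)
    opt = torch.optim.Adam([synth], lr=0.05)

    with torch.no_grad():
        target_gram = gram(features(target))

    for step in range(200):
        opt.zero_grad()
        # Match low-level spatiotemporal statistics of the target video.
        loss = nn.functional.mse_loss(gram(features(synth)), target_gram)
        loss.backward()
        opt.step()

Because the optimization constrains only feature statistics, the resulting video can share the target's low-level spatiotemporal structure while carrying no recognizable objects, which is the property the stimuli above exploit.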
DOI: http://dx.doi.org/10.1038/s43588-024-00746-w
J Cogn Neurosci
January 2025
Queen's University, Kingston, Ontario, Canada.
Pupil responses are commonly used to provide insight into visual perception, autonomic control, cognition, and various brain disorders. However, making inferences from pupil data can be complicated by nonlinearities in pupil dynamics and by variability within and across individuals, which challenge the assumptions of linearity or group-level homogeneity required by common analysis methods. In this study, we evaluated luminance-evoked pupil dynamics in young healthy adults (n = 10, M:F = 5:5, ages 19-25 years), identifying nonlinearities, variability, and relationships conserved across individuals to improve the ability to make inferences from pupil data.
Elife
January 2025
Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom.
Data-driven models of neurons and circuits are important for understanding how the properties of membrane conductances, synapses, dendrites, and the anatomical connectivity between neurons generate the complex dynamical behaviors of brain circuits in health and disease. However, the inherent complexity of these biological processes makes the construction and reuse of biologically detailed models challenging. A wide range of tools have been developed to aid their construction and simulation, but differences in design and internal representation act as technical barriers to those who wish to use data-driven models in their research workflows.
Plant Cell
January 2025
State Key Laboratory of Protein and Plant Gene Research, School of Life Sciences, Peking University, Beijing 100871, China.
Tracheary elements (TEs) are vital for transporting various substances and contribute to plant growth. TE differentiation is complex and regulated by a variety of microRNAs (miRNAs). However, the dynamic changes in miRNAs at each stage of TE differentiation remain unclear, and the miRNA regulatory network has not been fully mapped.
Elife
January 2025
Department of Psychology, Queen's University, Kingston, Canada.
Movie-watching is a central aspect of our lives and an important paradigm for understanding the brain mechanisms behind cognition as it occurs in daily life. Contemporary views of ongoing thought argue that the ability to make sense of events in the 'here and now' depends on the neural processing of incoming sensory information by auditory and visual cortex, which are kept in check by systems in association cortex. However, we currently lack an understanding of how patterns of ongoing thought map onto different brain systems when we watch a film, partly because methods of sampling experience disrupt the dynamics of brain activity and the experience of movie-watching.
J Phys Chem B
January 2025
Department of Physiology and Biophysics, Weill Cornell Medical College, New York, New York 10065, United States.
ModeHunter is a modular Python software package for simulating 3D biophysical motion across spatial resolution scales using modal analysis of elastic networks. It has been curated from our in-house Python scripts over the last 15 years, with a focus on detecting similarities of elastic motion between atomic structures, coarse-grained graphs, and volumetric data from biophysical or biomedical imaging sources such as electron microscopy or tomography. With ModeHunter, normal modes of biophysical motion can be analyzed with various static visualization techniques or brought to life through dynamic animation of single- or multimode trajectories or decoy ensembles.
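For orientation, the following is a generic sketch of the elastic-network normal-mode computation that underlies this kind of analysis. The function name, cutoff, and spring constant below are illustrative assumptions and are not ModeHunter's actual API.

    # Generic anisotropic-network-model (ANM) normal modes; illustrative only,
    # not ModeHunter's interface.
    import numpy as np

    def anm_hessian(coords, cutoff=10.0, gamma=1.0):
        # Build the 3N x 3N Hessian of an elastic network: nodes within
        # `cutoff` are connected by springs of uniform stiffness `gamma`.
        n = len(coords)
        hess = np.zeros((3 * n, 3 * n))
        for i in range(n):
            for j in range(i + 1, n):
                d = coords[j] - coords[i]
                r2 = d @ d
                if r2 > cutoff ** 2:
                    continue
                block = -gamma * np.outer(d, d) / r2
                hess[3*i:3*i+3, 3*j:3*j+3] = block
                hess[3*j:3*j+3, 3*i:3*i+3] = block
                hess[3*i:3*i+3, 3*i:3*i+3] -= block
                hess[3*j:3*j+3, 3*j:3*j+3] -= block
        return hess

    # Toy coarse-grained structure: random points standing in for C-alpha atoms.
    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 20, size=(50, 3))
    evals, evecs = np.linalg.eigh(anm_hessian(coords))
    # For a connected network, the first six eigenvalues are ~0 (rigid-body
    # modes); the following columns of `evecs` are the lowest-frequency
    # internal motions, the ones typically animated or compared across scales.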