To plan movements toward objects, our brain must recognize whether retinal displacement is due to self-motion, object-motion, or both. Here, we aimed to test whether motion areas are able to segregate these types of motion. We combined an event-related functional magnetic resonance imaging experiment, brain mapping techniques, and wide-field stimulation to study the responsivity of motion-sensitive areas to pure and combined self- and object-motion conditions during virtual movies of a train running within a realistic landscape. We observed a selective response in MT to the pure object-motion condition, and in medial (PEc, pCi, CSv, and CMA) and lateral (PIC and LOR) areas to the pure self-motion condition. Other regions (such as V6) responded more to complex visual stimulation in which both object- and self-motion were present. Notably, we found that some motion regions (V3A, LOR, MT, V6, and IPSmot) could extract object-motion information from the overall motion, recognizing the real movement of the train even when its retinal image remained still on the screen, or moved only because of self-movement. We propose that these motion areas might be good candidates for the "flow parsing mechanism," that is, the ability to extract object-motion information from retinal motion signals by subtracting out the optic flow components.
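The subtraction at the heart of flow parsing can be illustrated with a minimal sketch: retinal motion is modeled as the sum of the optic-flow field induced by self-motion and any world-relative object motion, so removing an estimate of the self-motion component leaves only the object's movement. The grid size, field values, and moving patch below are hypothetical, chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# 2D velocity field on a coarse 4x4 retinal grid: shape (rows, cols, vx/vy)
self_motion_flow = rng.normal(size=(4, 4, 2))      # flow induced by observer motion
object_motion = np.zeros((4, 4, 2))
object_motion[1, 2] = [1.5, 0.0]                   # one patch moves rightward in the world

# What the retina actually sees: the two components superimposed
retinal_motion = self_motion_flow + object_motion

# Flow parsing: subtract the (estimated) self-motion component
parsed = retinal_motion - self_motion_flow

# Only the object's patch carries residual motion after parsing
moving = np.argwhere(np.linalg.norm(parsed, axis=-1) > 1e-6)
print(moving)        # the single patch at row 1, col 2
print(parsed[1, 2])  # recovers the object's world-relative velocity
```

In practice the brain must estimate the self-motion flow field itself (e.g., from vestibular signals or the global flow pattern), which is exactly what makes the neural implementation of this mechanism an open question.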
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7267932
DOI: http://dx.doi.org/10.1002/hbm.24862
IEEE Trans Pattern Anal Mach Intell
November 2024
Offset-based representation has emerged as a promising approach for modeling semantic relations between pixels and object motion, demonstrating efficacy across various computer vision tasks. In this paper, we introduce a novel one-stage multi-task network that extends the offset-based approach to multi-object tracking and segmentation (MOTS). Our proposed framework, named OffsetNet, is designed to concurrently address amodal bounding box detection, instance segmentation, and tracking.
Commun Biol
October 2024
SENSE Research Unit, Queen Square Institute of Neurology, University College London, 33 Queen Square, London, UK.
Attributing motion to the self or to external sources on the basis of vestibular cues is thought to underlie our coherent perception of object motion and self-motion. However, it remains unclear whether such attribution also underlies sensorimotor responses.
ACS Nano
October 2024
Key Laboratory of Analog Integrated Circuits and Systems (Ministry of Education), School of Integrated Circuits, Xidian University, Xi'an 710071, China.
Curr Biol
November 2024
Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14627, USA. Electronic address:
For the brain to compute object motion in the world during self-motion, it must discount the global patterns of image motion (optic flow) caused by self-motion. Optic flow parsing is a proposed visual mechanism for computing object motion in the world, and studies in both humans and monkeys have demonstrated perceptual biases consistent with the operation of a flow-parsing mechanism. However, the neural basis of flow parsing remains unknown.
Neural Netw
November 2024
JD Explore Academy, Beijing, 102628, China.
Significant progress has been achieved in multi-object tracking (MOT) through the evolution of detection and re-identification (ReID) techniques. Despite these advancements, accurately tracking objects in scenarios with homogeneous appearance and heterogeneous motion remains a challenge. This challenge arises from two main factors: the insufficient discriminability of ReID features and the predominant utilization of linear motion models in MOT.
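The "linear motion models" the abstract refers to are typically constant-velocity predictors, of the kind used in the predict step of a Kalman filter for track extrapolation. A minimal sketch, with a hypothetical state layout of `[x, y, vx, vy]`:

```python
import numpy as np

def predict(state: np.ndarray, dt: float = 1.0) -> np.ndarray:
    """Advance an [x, y, vx, vy] track state one step under constant velocity."""
    F = np.array([
        [1.0, 0.0,  dt, 0.0],
        [0.0, 1.0, 0.0,  dt],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])
    return F @ state

# A track at (10, 5) moving (+2, -1) per frame
track = np.array([10.0, 5.0, 2.0, -1.0])
one_step = predict(track)            # position (12, 4), velocity unchanged
two_steps = predict(predict(track))  # position (14, 3), velocity unchanged
```

Such a model extrapolates well for smooth trajectories, but degrades under abrupt or heterogeneous motion, which is precisely the failure mode the paper highlights.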