Since scenes in nature are highly dynamic, perception requires an ongoing and robust integration of local information into global representations. In vision, contour integration (CI) is one of these tasks, and it is performed by our brain in a seemingly effortless manner. Following the rule of good continuation, oriented line segments are linked into contour percepts, thus supporting important visual computations such as the detection of object boundaries. This process has been studied almost exclusively using static stimuli, raising the question of whether the observed robustness and "pop-out" quality of CI carries over to dynamic scenes. We investigate contour detection in dynamic stimuli in which targets appear at random times as Gabor elements align to form contours. In briefly presented displays (230 ms), a situation comparable to classical paradigms in CI, performance is about 87%. Surprisingly, we find that detection performance decreases to 67% in extended presentations (about 1.9-3.8 s) for the same target stimuli. In order to observe the same reduction with briefly presented stimuli, presentation time has to be drastically decreased to intervals as short as 50 ms. Cueing a specific contour position or shape partially compensates for this deterioration, and only in extended presentations was combining a location cue and a shape cue more efficient than providing a single cue. Our findings challenge the notion of CI as a mainly stimulus-driven process leading to pop-out percepts, indicating that top-down processes play a much larger role in supporting fundamental integration processes in dynamic scenes than previously thought.
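As an illustration of the stimulus construction described above, the sketch below generates an oriented Gabor patch and lays out a handful of elements whose orientations follow a smooth path, so that they align into a contour that could be embedded among randomly oriented distractors. All parameter values (patch size, spatial frequency, spacing, curvature) are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def gabor(size=32, wavelength=8.0, sigma=5.0, theta=0.0):
    """Oriented Gabor patch: a sinusoidal carrier under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    carrier_axis = x * np.cos(theta) + y * np.sin(theta)  # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * carrier_axis / wavelength)

def contour_elements(n=7, spacing=40.0, turn=np.deg2rad(15)):
    """Positions and orientations of n elements aligned along a smooth path,
    following the rule of good continuation: each element's orientation
    matches the local path direction."""
    positions, orientations = [], []
    x, y, heading = 0.0, 0.0, 0.0
    for _ in range(n):
        positions.append((x, y))
        orientations.append(heading)   # element orientation = path tangent
        heading += turn                # gentle turn between elements
        x += spacing * np.cos(heading)
        y += spacing * np.sin(heading)
    return positions, orientations

# Distractor elements would instead receive uniformly random orientations:
rng = np.random.default_rng(0)
distractor_orientations = rng.uniform(0, np.pi, size=100)
```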
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5591827 | PMC
http://dx.doi.org/10.3389/fpsyg.2017.01501 | DOI Listing
In image-guided radiotherapy (IGRT), four-dimensional cone-beam computed tomography (4D-CBCT) is critical for assessing tumor motion during a patient's breathing cycle prior to beam delivery. However, generating 4D-CBCT images with sufficient quality requires significantly more projection images than a standard 3D-CBCT scan, leading to extended scanning times and increased imaging dose to the patient. To address these limitations, there is a strong demand for methods capable of reconstructing high-quality 4D-CBCT images from a 1-minute 3D-CBCT acquisition.
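A standard preprocessing step behind any 4D-CBCT reconstruction (independent of the specific method this article proposes) is sorting projections into respiratory phase bins, with one volume reconstructed per bin. The sketch below shows such phase binning from a surrogate breathing signal; the function name, the peak-based phase definition, and the choice of 10 bins are assumptions for illustration only.

```python
import numpy as np

def phase_bin_projections(proj_times, breathing_signal, signal_times, n_bins=10):
    """Assign each projection to a respiratory phase bin in 0..n_bins-1.

    Phase is the fraction of the breathing cycle elapsed since the most
    recent peak of the surrogate signal; projections outside a complete
    cycle are marked with -1.
    """
    sig = np.asarray(breathing_signal, dtype=float)
    # Local maxima of the surrogate signal mark the start of each cycle.
    peaks = np.where((sig[1:-1] > sig[:-2]) & (sig[1:-1] > sig[2:]))[0] + 1
    peak_times = np.asarray(signal_times, dtype=float)[peaks]

    bins = np.full(len(proj_times), -1, dtype=int)
    for i, t in enumerate(proj_times):
        earlier = peak_times[peak_times <= t]
        later = peak_times[peak_times > t]
        if earlier.size == 0 or later.size == 0:
            continue
        phase = (t - earlier[-1]) / (later[0] - earlier[-1])  # in [0, 1)
        bins[i] = min(int(phase * n_bins), n_bins - 1)
    return bins
```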
Psychophysiology
January 2025
Department of Psychology, University of Georgia, Athens, Georgia, USA.
Emotional experiences involve dynamic multisensory perception, yet most EEG research uses unimodal stimuli such as naturalistic scene photographs. Recent research suggests that realistic emotional videos reliably reduce the amplitude of a steady-state visual evoked potential (ssVEP) elicited by a flickering border. Here, we examine the extent to which this video-ssVEP measure compares with the well-established Late Positive Potential (LPP) that is reliably larger for emotional relative to neutral scenes.
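For context on the ssVEP measure itself: its amplitude is typically quantified as the spectral amplitude of the EEG at the border's flicker frequency. The sketch below shows a minimal version of that computation; the 12 Hz tagging frequency, sampling rate, and single-channel input are illustrative assumptions, not details taken from this study.

```python
import numpy as np

def ssvep_amplitude(eeg, fs, flicker_hz=12.0):
    """Spectral amplitude of a single EEG channel at the flicker frequency.

    eeg: 1-D array (e.g., a trial-averaged channel); fs: sampling rate in Hz.
    """
    eeg = np.asarray(eeg, dtype=float)
    eeg = eeg - eeg.mean()                         # remove DC offset
    spectrum = np.fft.rfft(eeg * np.hanning(len(eeg)))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - flicker_hz))    # nearest frequency bin
    return 2.0 * np.abs(spectrum[idx]) / len(eeg)  # single-sided amplitude

# Example: a noisy 12 Hz signal sampled at 500 Hz for 4 s.
fs = 500
t = np.arange(0, 4, 1 / fs)
simulated = 2e-6 * np.sin(2 * np.pi * 12 * t) + 1e-6 * np.random.randn(len(t))
print(ssvep_amplitude(simulated, fs))
```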
Cult Health Sex
January 2025
Department of Management, Bogazici University, Istanbul, Türkiye.
This paper examines the motivations and experiences of older French-speaking men who relocate to Thailand, driven by the desire for a more fulfilling and liberated lifestyle that contrasts with their experiences in their home countries. Through an analysis of online video interviews with 31 expatriates, the study reveals a prevalent trend among these men to initially engage in short-term sexual relationships, enjoying the freedoms of Thailand's vibrant social scene. However, as they acclimate to their new environment, a significant shift towards long-term partnerships is observed, marking a transition from transient interactions to more meaningful connections.
Sensors (Basel)
January 2025
Shanghai Film Academy, Shanghai University, Shanghai 200072, China.
The advancement of neural radiance fields (NeRFs) has facilitated the high-quality 3D reconstruction of complex scenes. However, for most NeRFs, reconstructing 3D tissues from endoscopy images poses significant challenges due to the occlusion of soft tissue regions by invalid pixels, deformations in soft tissue, and poor image quality, which severely limits their application in endoscopic scenarios. To address the above issues, we propose a novel framework to reconstruct high-fidelity soft tissue scenes from low-quality endoscopic images.
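The abstract does not detail how the proposed framework handles invalid pixels, but a common ingredient in NeRF pipelines for such data is simply to exclude masked-out pixels from the photometric loss so they do not corrupt the reconstruction. The sketch below illustrates that generic idea under assumed array shapes; it is not the authors' method.

```python
import numpy as np

def masked_photometric_loss(rendered_rgb, target_rgb, valid_mask):
    """Mean squared color error computed over valid pixels only.

    rendered_rgb, target_rgb: (N, 3) arrays of per-ray colors in [0, 1].
    valid_mask: (N,) boolean array; False marks invalid pixels
                (e.g., specular highlights or instrument occlusions).
    """
    diff = (np.asarray(rendered_rgb) - np.asarray(target_rgb))[valid_mask]
    if diff.size == 0:
        return 0.0
    return float(np.mean(diff ** 2))
```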
PLoS One
January 2025
Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, United States of America.
Objective: What we hear may influence postural control, particularly in people with vestibular hypofunction. Would hearing a moving subway destabilize people similarly to seeing the train move? We investigated how people with unilateral vestibular hypofunction and healthy controls incorporated broadband and real-recorded sounds with visual load for balance in an immersive contextual scene.
Design: Participants stood on foam placed on a force-platform, wore the HTC Vive headset, and observed an immersive subway environment.
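Force-platform balance data of this kind are commonly summarized with center-of-pressure (COP) sway measures. The sketch below computes three standard ones (path length, RMS displacement, mean sway velocity) from COP traces; it is a generic illustration, not the analysis pipeline of this study.

```python
import numpy as np

def cop_sway_metrics(cop_x, cop_y, fs):
    """Standard postural sway summaries from center-of-pressure traces.

    cop_x, cop_y: COP coordinates in meters, sampled at fs Hz.
    Returns path length (m), RMS displacement about the mean (m),
    and mean sway velocity (m/s).
    """
    x = np.asarray(cop_x, dtype=float)
    y = np.asarray(cop_y, dtype=float)
    x -= x.mean()
    y -= y.mean()
    path_length = np.sum(np.hypot(np.diff(x), np.diff(y)))
    rms = np.sqrt(np.mean(x**2 + y**2))
    mean_velocity = path_length / (len(x) / fs)
    return path_length, rms, mean_velocity
```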