Feature-based attention has been shown to aid object perception. Our previous ERP findings revealed temporally late feature-based modulation in response to objects relative to motion. The aim of the current study was to confirm the timing of feature-based influences on object perception while cueing within the feature dimension of shape. Participants were told to expect either "pillow" or "flower" objects embedded among random white and black lines. Participants reported the object's main color more accurately for valid compared to invalid shapes. ERPs revealed modulation from 252-502 ms, spanning occipital to frontal electrodes. Our results are consistent with previous findings examining the time course for processing similar stimuli (illusory contours), and they provide novel insight into how attending to features of higher complexity aids object perception, presumably via feed-forward and feedback mechanisms along the visual hierarchy.
DOI: http://dx.doi.org/10.1111/psyp.12174
PLoS One
January 2025
Center for Cognitive Science, Institute for Convergence Science and Technology (ICST), Sharif University of Technology, Tehran, Iran.
The brain can remarkably adapt its decision-making process to suit the dynamic environment and diverse aims and demands. The brain's flexibility can be classified into three categories: flexibility in choosing solutions, decision policies, and actions. We employ two experiments to explore flexibility in decision policy: a visual object categorization task and an auditory object categorization task.
J Exp Psychol Learn Mem Cogn
December 2024
Technical University of Darmstadt, Institute of Psychology.
The goal of the present investigation was to perform a registered replication of Jones and Macken's (1995b) study, which showed that segregating a sequence of sounds to distinct locations reduced its disruptive effect on serial recall, thereby postulating an intriguing connection between auditory stream segregation and the cognitive mechanisms underlying the irrelevant speech effect. Specifically, a sequence of changing utterances was found to be less disruptive in stereophonic presentation, which allowed each auditory object (letter) to be allocated to a unique location (right ear, left ear, center), compared to when the same sounds were played monophonically.
Comput Struct Biotechnol J
December 2024
The State Key Laboratory of Digital Medical Engineering, Jiangsu Key Laboratory of Remote Measurement and Control, School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China.
Object handover is a fundamental task for collaborative robots, particularly service robots. In in-home assistance scenarios, individuals often face constraints due to their posture and declining physical functions, placing high demands on robots for flexible real-time control and intuitive interaction. During robot-to-human handovers, individuals are limited to making perceptual judgements based on the appearance of the object and the consistent behaviour of the robot.
Trends Hear
January 2025
Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, University of Cologne, Cologne, Germany.
Speech-on-speech masking is a common and challenging situation in everyday verbal communication. The ability to segregate competing auditory streams is a prerequisite for focusing attention on the target speech. The Visual World Paradigm (VWP) provides insight into speech processing by capturing gaze fixations on visually presented icons that reflect the speech signal.
Sensors (Basel)
December 2024
Institute of Computer and Communication Engineering, Department of Electrical Engineering, National Cheng Kung University, Tainan 701, Taiwan.
Precise depth estimation plays a key role in many applications, including 3D scene reconstruction, virtual reality, autonomous driving, and human-computer interaction. Owing to recent advances in deep learning, monocular depth estimation, with its simplicity, has surpassed traditional stereo camera systems, opening new possibilities in 3D sensing. In this paper, using a single camera, we propose an end-to-end supervised monocular depth estimation autoencoder that combines an encoder mixing a convolutional neural network with vision transformers and an effective adaptive fusion decoder to obtain high-precision depth maps.
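To make the general idea of such a hybrid architecture concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' actual model): a small CNN stem supplies local features, a transformer models global context over the flattened feature map, and a gated "adaptive fusion" decoder mixes the two branches before upsampling to a depth map. All module names, layer sizes, and the gating scheme are illustrative assumptions.

```python
# Hypothetical sketch (not the published model): hybrid CNN + transformer
# encoder with a gated fusion decoder for supervised monocular depth estimation.
import torch
import torch.nn as nn

class HybridEncoder(nn.Module):
    """CNN stem for local features, transformer blocks for global context."""
    def __init__(self, dim=128, heads=4, depth=2):
        super().__init__()
        self.stem = nn.Sequential(                      # downsample 4x
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        local_feat = self.stem(x)                       # B x C x H/4 x W/4
        b, c, h, w = local_feat.shape
        tokens = local_feat.flatten(2).transpose(1, 2)  # B x (HW) x C
        tokens = self.transformer(tokens)
        global_feat = tokens.transpose(1, 2).reshape(b, c, h, w)
        return local_feat, global_feat                  # local + global branches

class FusionDecoder(nn.Module):
    """Adaptively weight the two branches, then upsample to a depth map."""
    def __init__(self, dim=128):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * dim, dim, 1), nn.Sigmoid())
        self.head = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(dim, 1, 3, padding=1),
        )

    def forward(self, local_feat, global_feat):
        g = self.gate(torch.cat([local_feat, global_feat], dim=1))
        fused = g * local_feat + (1 - g) * global_feat  # learned per-pixel mix
        return self.head(fused)                         # B x 1 x H x W depth map

class DepthAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder, self.decoder = HybridEncoder(), FusionDecoder()

    def forward(self, x):
        return self.decoder(*self.encoder(x))

depth = DepthAutoencoder()(torch.randn(1, 3, 64, 64))   # -> shape (1, 1, 64, 64)
```

In a supervised setting like the one described, such a network would be trained against ground-truth depth maps (e.g., with an L1 or scale-invariant loss); the sketch only illustrates how local CNN features and global transformer features might be fused adaptively per pixel.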