In the present study, we investigated how spatial attention, driven by unisensory and multisensory cues, can bias the access of information into visuo-spatial working memory (VSWM). In a series of four experiments, we compared the effectiveness of spatially nonpredictive visual, auditory, and audiovisual cues in capturing participants' spatial attention towards a location where to-be-remembered visual stimuli were or were not presented (cued and uncued trials, respectively). The results suggest that the effect of peripheral visual cues in biasing the access of information into VSWM depends on the size of the attentional focus, whereas auditory cues had no direct effect on VSWM. Finally, spatially congruent multisensory cues produced a larger attentional effect in VSWM than unimodal visual cues, a likely consequence of multisensory integration. This latter result sheds new light on the interplay between spatial attention and VSWM, pointing to the special role of multisensory (audiovisual) cues.
DOI: http://dx.doi.org/10.1037/a0023513
Sensors (Basel)
January 2025
Department of Computer Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia.
Traffic flow prediction is a pivotal element of Intelligent Transportation Systems (ITSs), with significant real-world applications. Capturing the complex, dynamic spatio-temporal patterns in traffic data remains a major challenge, and various approaches to effectively modeling these spatio-temporal correlations have been proposed.
Sensors (Basel)
January 2025
Department of AI & Big Data, Honam University, Gwangju 62399, Republic of Korea.
This study proposes an advanced plant disease classification framework leveraging the Attention Score-Based Multi-Vision Transformer (Multi-ViT) model. The framework introduces a novel attention mechanism to dynamically prioritize relevant features from multiple leaf images, overcoming the limitations of single-leaf-based diagnoses. Building on the Vision Transformer (ViT) architecture, the Multi-ViT model aggregates diverse feature representations by combining outputs from multiple ViTs, each capturing unique visual patterns.
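The aggregation step described above — weighting the feature representations of multiple leaf images by attention scores before combining them — can be illustrated with a minimal sketch. This is not the authors' Multi-ViT code; the function name and the use of a feature-norm relevance score are illustrative assumptions standing in for the learned attention mechanism.

```python
import numpy as np

def aggregate_leaf_features(leaf_features: np.ndarray) -> np.ndarray:
    """Combine per-leaf feature vectors into one representation by a
    softmax-weighted sum, a stand-in for the attention-score-based
    aggregation the abstract describes (here the score is simply the
    feature-vector norm; the real model learns these scores)."""
    # One scalar relevance score per leaf image.
    scores = np.linalg.norm(leaf_features, axis=1)
    # Softmax over leaves (max-subtracted for numerical stability).
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Attention-weighted sum of the per-leaf feature vectors.
    return (weights[:, None] * leaf_features).sum(axis=0)

# Three hypothetical leaf images, each reduced to a 2-D feature vector.
features = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
fused = aggregate_leaf_features(features)  # single fused vector, shape (2,)
```

In the actual Multi-ViT framework each row would be the output embedding of a separate ViT, and the scores would come from a learned attention module rather than a norm heuristic.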
Sensors (Basel)
January 2025
School of Information Engineering, China University of Geosciences, Beijing 100083, China.
Extracting fragmented cropland is essential for effective cropland management and sustainable agricultural development, but it presents significant challenges due to irregular, blurred boundaries and the diversity of crop types and distributions. Deep learning methods are widely used for land cover classification.
Sensors (Basel)
January 2025
Department of Information Technology, Quaid e Awam University, Nawabshah 67450, Pakistan.
Detection of anomalies in video surveillance plays a key role in ensuring the safety and security of public spaces. As the number of surveillance cameras grows, manual monitoring becomes impractical, so automated systems are needed.
Sensors (Basel)
December 2024
School of Physics and Electronics, Nanning Normal University, Nanning 530100, China.
Remote sensing change detection (RSCD), which uses dual-temporal images to predict change locations, plays an essential role in long-term Earth observation missions. Although many deep-learning-based RSCD models perform well, challenges remain in effectively extracting change information between dual-temporal images and in fully leveraging the interactions between their feature maps. To address these challenges, a constraint- and interaction-based network (CINet) for RSCD is proposed.
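The core dual-temporal comparison behind RSCD can be illustrated with the simplest possible baseline: a per-pixel difference of two co-registered images, thresholded into a binary change mask. This is a hypothetical sketch, not CINet itself; CINet learns this comparison end to end on deep feature maps, whereas the function and threshold below are illustrative assumptions.

```python
import numpy as np

def change_map(t1: np.ndarray, t2: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    """Naive change-detection baseline: absolute per-pixel difference of
    two co-registered images at times t1 and t2, thresholded to a binary
    mask (1 = changed, 0 = unchanged)."""
    diff = np.abs(t2.astype(float) - t1.astype(float))
    return (diff > thresh).astype(np.uint8)

# Two tiny 2x2 "images" at different times (values in [0, 1]).
before = np.array([[0.0, 0.2], [0.9, 0.1]])
after = np.array([[0.0, 0.9], [0.1, 0.1]])
mask = change_map(before, after)  # → [[0, 1], [1, 0]]
```

Learned RSCD models replace the raw intensity difference with differences between deep feature maps, which is what makes them robust to illumination and seasonal variation that would fool this pixel-level baseline.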