Previous dual-task studies examining the locus of semantic interference from distractor words in picture naming have obtained diverging results. In these studies, participants manually responded to tones and named pictures while ignoring distractor words (picture-word interference, PWI), with varying stimulus onset asynchrony (SOA) between the tone and the PWI stimulus. Whereas some studies observed no semantic interference at short SOAs, others observed effects of similar magnitude at short and long SOAs. The absence of semantic interference in some studies may reflect better reading skill among participants in those studies than in the others. According to such a reading-ability account, participants' reading skill should predict the magnitude of their interference effect at short SOAs. To test this account, we conducted a dual-task study with tone discrimination and PWI tasks and measured participants' reading ability. The semantic interference effect was of similar magnitude at short and long SOAs. Participants' reading ability predicted their naming speed but not their semantic interference effect, contrary to the reading-ability account. We conclude that the magnitude of semantic interference in picture naming during dual-task performance does not depend on reading skill.
Source: http://dx.doi.org/10.1080/17470218.2014.985689 (DOI)
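The reading-ability account makes a concrete statistical prediction: reading skill should moderate the semantic interference effect (related vs. unrelated distractors), particularly at the short SOA. As a rough illustration of how such a moderation test can be framed, the sketch below fits a linear mixed-effects model to simulated data; the variable names, effect sizes, and model structure are illustrative assumptions, not the authors' actual analysis.

```python
# Hypothetical sketch (not the authors' analysis): does a participant-level
# reading score moderate the semantic interference effect across SOAs?
# All data here are simulated with illustrative effect sizes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_items = 32, 40

rows = []
for s in range(n_subj):
    reading = rng.normal(0.0, 1.0)          # standardized reading-ability score
    subj_speed = rng.normal(0.0, 40.0)      # per-participant baseline speed
    for i in range(n_items):
        for soa in ("short", "long"):
            for related in (0, 1):
                rt = (700.0
                      + subj_speed
                      - 30.0 * reading       # better readers name faster
                      + 25.0 * related       # interference of equal size at both SOAs
                      + rng.normal(0.0, 60.0))
                rows.append(dict(subject=s, soa=soa, related=related,
                                 reading=reading, rt=rt))
df = pd.DataFrame(rows)

# Random intercepts by participant; the key terms are related:reading and
# related:C(soa):reading, which the reading-ability account predicts to be
# reliable and which the reported data did not support.
model = smf.mixedlm("rt ~ related * C(soa) * reading", data=df, groups=df["subject"])
print(model.fit().summary())
```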
Cogn Neurodyn
December 2025
Image Processing Laboratory, University of Valencia, Valencia, Spain.
In recent years, substantial strides have been made in visual image reconstruction, particularly in the capacity to generate high-quality visual representations from human brain activity while taking semantic information into account. This advancement not only enables the recreation of visual content but also provides valuable insight into the processes occurring in high-order functional brain regions, contributing to a deeper understanding of brain function. However, incorporating fused semantic information into the reconstruction amounts to semantic-to-image guided reconstruction; it may bypass the underlying neural computational mechanisms and therefore does not represent true reconstruction from brain activity.
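For concreteness, the sketch below illustrates what a semantic-to-image guided pipeline typically involves: a linear decoder maps voxel patterns to a semantic embedding, which would then condition a pretrained image generator. The dimensions, the ridge decoder, and the simulated data are all assumptions for illustration, not the method evaluated in the article.

```python
# Hypothetical sketch of "semantic-to-image guided" reconstruction: a linear
# decoder maps fMRI voxel patterns to a semantic embedding (e.g., a CLIP-like
# vector), which would then condition an image generator. Simulated data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_voxels, emb_dim = 800, 2000, 256

true_w = rng.normal(size=(n_voxels, emb_dim)) / np.sqrt(n_voxels)
X = rng.normal(size=(n_trials, n_voxels))                     # voxel responses per image
Y = X @ true_w + 0.5 * rng.normal(size=(n_trials, emb_dim))   # semantic embeddings

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
decoder = Ridge(alpha=1e3).fit(X_tr, Y_tr)
Y_hat = decoder.predict(X_te)

# In a full pipeline, Y_hat would condition a pretrained generator (e.g., a
# latent diffusion model) to synthesize the image; the abstract's point is
# that this step is guided by semantics rather than by decoded neural codes.
corr = np.mean([np.corrcoef(Y_hat[i], Y_te[i])[0, 1] for i in range(len(Y_te))])
print(f"mean per-trial embedding correlation: {corr:.3f}")
```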
Brain Struct Funct
January 2025
CHRIST (Deemed to be University), Bangalore, Karnataka, India.
In this investigation, we delve into the neural underpinnings of auditory comprehension of Sanskrit verse, an area not previously explored in neuroscientific research. Our study examines a diverse group of 44 bilingual individuals, including both proficient and non-proficient Sanskrit speakers, to uncover the neural patterns involved in processing verses of this ancient language. Employing an integrated neuroimaging approach that combines functional connectivity-multivariate pattern analysis (fc-MVPA), voxel-based univariate analysis, seed-based connectivity analysis, and sparse fMRI acquisition to minimize interference from scanner noise, we highlight the brain's adaptability and its ability to integrate multiple types of information.
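Of the analyses listed, seed-based connectivity is the most compact to illustrate. The sketch below shows the generic computation on simulated data: the seed's mean time course is correlated with every voxel and Fisher z-transformed. The seed choice, dimensions, and data are placeholders, not details from the study.

```python
# Minimal sketch of seed-based functional connectivity on simulated data:
# correlate the mean time course of a seed region with every other voxel.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 240, 10000
data = rng.normal(size=(n_timepoints, n_voxels))   # preprocessed BOLD signals

seed_idx = np.arange(0, 50)                        # voxels of a hypothetical seed region
seed_ts = data[:, seed_idx].mean(axis=1)

# Pearson correlation of the seed time course with every voxel, then a
# Fisher z-transform to prepare the maps for group-level statistics.
z_data = (data - data.mean(0)) / data.std(0)
z_seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
r_map = z_data.T @ z_seed / n_timepoints
z_map = np.arctanh(np.clip(r_map, -0.999, 0.999))
print(z_map.shape, z_map[:5])
```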
Neural Netw
December 2024
School of Computer and Electronic Information, Guangxi University, University Road, Nanning, 530004, Guangxi, China. Electronic address:
Vision-language navigation (VLN) is a challenging task that requires agents to extract the relevant correlations between modalities from redundant information according to the instructions, and then make sequential decisions over visual scenes and text instructions in the action space. Recent research has focused on extracting visual features and enhancing textual knowledge, while ignoring the potential bias in multi-modal data and the problem of spurious correlations between vision and text. This paper therefore studies the relational structure of multi-modal data from a causal perspective and weakens spurious correlations between modalities through cross-modal causal reasoning.
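As a point of reference for the cross-modal correlation the abstract refers to, the sketch below shows a generic attention step in which instruction tokens attend over visual features before action scoring. It is an assumed, simplified module; the paper's causal-reasoning mechanism is not reproduced here.

```python
# Hypothetical sketch of the cross-modal step in a VLN agent: instruction
# tokens attend over visual features, then logits over the action space are
# produced. Generic cross-modal attention, not the paper's causal module.
import torch
import torch.nn as nn

class CrossModalStep(nn.Module):
    def __init__(self, dim=256, heads=4, n_actions=6):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.action_head = nn.Linear(dim, n_actions)

    def forward(self, text_tokens, visual_feats):
        # text_tokens: (B, L_t, dim), visual_feats: (B, L_v, dim)
        grounded, _ = self.attn(query=text_tokens, key=visual_feats, value=visual_feats)
        fused = self.norm(text_tokens + grounded)     # residual fusion
        return self.action_head(fused.mean(dim=1))    # logits over the action space

model = CrossModalStep()
logits = model(torch.randn(2, 12, 256), torch.randn(2, 36, 256))
print(logits.shape)  # torch.Size([2, 6])
```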
Sensors (Basel)
December 2024
College of Information Engineering, Henan University of Science and Technology, Luoyang 471023, China.
To achieve infrared aircraft detection under interference conditions, this paper proposes an infrared aircraft detection algorithm based on a high-resolution, feature-enhanced semantic segmentation network. First, a location attention mechanism enhances the current-level feature map by computing correlation weights between pixels at different positions. This map is then fused with the high-level, semantically rich feature map to construct a location-attention feature fusion network, enhancing the representation of target features.
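The sketch below illustrates the general shape of such a block: pairwise position-attention weights enhance the current-level feature map, which is then fused with an upsampled high-level semantic map. Channel sizes and the exact formulation are assumptions for illustration, not the published design.

```python
# Hypothetical sketch of a location (position) attention block: pairwise
# correlation weights between spatial positions re-weight the current-level
# feature map, which is then fused with an upsampled high-level feature map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocationAttentionFusion(nn.Module):
    def __init__(self, c_low=64, c_high=128):
        super().__init__()
        self.query = nn.Conv2d(c_low, c_low // 8, 1)
        self.key = nn.Conv2d(c_low, c_low // 8, 1)
        self.value = nn.Conv2d(c_low, c_low, 1)
        self.gamma = nn.Parameter(torch.zeros(1))
        self.proj_high = nn.Conv2d(c_high, c_low, 1)

    def forward(self, low, high):
        b, c, h, w = low.shape
        q = self.query(low).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.key(low).flatten(2)                     # (B, C', HW)
        attn = torch.softmax(q @ k, dim=-1)              # (B, HW, HW) position weights
        v = self.value(low).flatten(2)                   # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        enhanced = low + self.gamma * out                # attention-enhanced current level
        high_up = F.interpolate(self.proj_high(high), size=(h, w),
                                mode="bilinear", align_corners=False)
        return enhanced + high_up                        # fuse with semantic features

block = LocationAttentionFusion()
fused = block(torch.randn(1, 64, 32, 32), torch.randn(1, 128, 16, 16))
print(fused.shape)  # torch.Size([1, 64, 32, 32])
```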
Sci Rep
January 2025
School of Information and Communication Engineering, North University of China, Taiyuan, 030051, China.
The Insulated Gate Bipolar Transistor (IGBT) is a crucial power semiconductor device, and the integrity of its internal structure directly influences both its electrical performance and long-term reliability. Precise semantic segmentation of IGBT ultrasonic tomographic images, however, poses several challenges, primarily due to high-density noise interference and visual distortion caused by target warping. To address these challenges, this paper constructs a dedicated IGBT ultrasonic tomography (IUT) dataset using Scanning Acoustic Microscopy (SAM) and proposes a lightweight Multi-Scale Fusion Network (LMFNet) aimed at improving segmentation accuracy and processing efficiency in ultrasonic image analysis.
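As a rough sketch of what lightweight multi-scale fusion can look like, the block below combines depthwise-separable convolutions at several dilation rates and projects the result to class logits. It is an assumed illustration of the general idea, not the published LMFNet architecture.

```python
# Hypothetical sketch of a lightweight multi-scale fusion head for segmenting
# noisy ultrasonic images: depthwise-separable convolutions at several
# dilation rates are concatenated and projected to class logits.
import torch
import torch.nn as nn

class DepthwiseSeparable(nn.Module):
    def __init__(self, c_in, c_out, dilation):
        super().__init__()
        self.dw = nn.Conv2d(c_in, c_in, 3, padding=dilation,
                            dilation=dilation, groups=c_in, bias=False)
        self.pw = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pw(self.dw(x))))

class MultiScaleFusionHead(nn.Module):
    def __init__(self, c_in=32, n_classes=3, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            DepthwiseSeparable(c_in, c_in, d) for d in dilations)
        self.fuse = nn.Conv2d(c_in * len(dilations), n_classes, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

head = MultiScaleFusionHead()
logits = head(torch.randn(1, 32, 128, 128))
print(logits.shape)  # torch.Size([1, 3, 128, 128])
```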