Presenting different visual object stimuli can elicit detectable changes in EEG recordings, but such changes are typically observed only after averaging data from many trials and many participants. We report results from a simple visual object recognition experiment in which independent component analysis (ICA) preprocessing and machine learning classification correctly detected the presence of visual stimuli within single trials, at around 87% accuracy (0.70 AUC, p<0.0001), using data from single ICs. Seven subjects observed a series of everyday visual object stimuli while EEG was recorded; their task was to indicate whether or not they recognised each object as familiar. EEG or IC data from a subset of initial object presentations was used to train support vector machine (SVM) classifiers, which then generated labels for subsequent data. Task-label classifier accuracy gives a proxy measure of the task-related information present in the training data, allowing comparison of EEG data processing techniques: here, selected single ICs gave higher performance than classification from any single scalp EEG channel (0.70 AUC vs 0.65 AUC, p<0.0001). Most of these selected single ICs were located in occipital regions. Scoring a sliding analysis window moved through the time-points of each trial revealed that accuracy peaks when using data from +75 to +125 ms relative to the object appearing on screen. We discuss the use of such classification and the potential cognitive implications of differential accuracy on IC activations.
DOI: http://dx.doi.org/10.1016/j.jneumeth.2014.02.014
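The train-then-score pipeline described in the abstract can be sketched with scikit-learn. This is a minimal illustration under stated assumptions, not the authors' code: single-IC activations are simulated as a (trials x time-points) array with a class-dependent deflection around +75 to +125 ms, an SVM is trained on an initial subset of trials, and held-out trials are scored by AUC, including a sliding-window pass through the epoch.

```python
# Minimal sketch of SVM classification of single-trial IC activations.
# Data are synthetic stand-ins for real EEG/IC epochs; the 0.8-sigma
# deflection at samples 75-125 and all sizes are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_trials, n_times = 200, 250
labels = rng.integers(0, 2, n_trials)     # task label per trial

# Simulated IC activation with a class-dependent deflection post-stimulus
data = rng.normal(size=(n_trials, n_times))
data[labels == 1, 75:125] += 0.8

# Train on an initial subset of trials, score the remainder
n_train = n_trials // 2
clf = SVC(kernel="linear").fit(data[:n_train], labels[:n_train])
scores = clf.decision_function(data[n_train:])
print("whole-epoch AUC:", round(roc_auc_score(labels[n_train:], scores), 3))

# Sliding-window scoring: step a window through the epoch to find
# when task-related information peaks
for start in range(0, n_times - 50 + 1, 50):
    win = slice(start, start + 50)
    w_clf = SVC(kernel="linear").fit(data[:n_train, win], labels[:n_train])
    w_auc = roc_auc_score(labels[n_train:], w_clf.decision_function(data[n_train:, win]))
    print(f"window {start}-{start + 50}: AUC = {w_auc:.3f}")
```

With real data, the per-window AUC trace is what localises peak accuracy to a post-stimulus latency, as the paper reports for +75 to +125 ms.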
Sci Rep
January 2025
Laboratory of Chemical Biology, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun, 130022, Jilin, China.
To address tracking errors caused by collisions between Caenorhabditis elegans, this research proposes an improved particle filter tracking method integrated with a cultural algorithm. The particle filter is further enhanced by integrating the sine cosine algorithm, facilitating uninterrupted tracking of the target C. elegans.
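For context, the basic particle-filter loop that such tracking methods build on can be sketched in a few lines. This is a generic one-dimensional predict/update/resample cycle, not the cultural-algorithm or sine-cosine-enhanced variant described above; the motion model, noise levels, and all names are illustrative assumptions.

```python
# Generic particle-filter tracking step: propagate particles under a
# random-walk motion model, weight by a Gaussian observation likelihood,
# then resample in proportion to the weights.
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, observation,
                         motion_noise=1.0, obs_noise=2.0):
    # Predict: diffuse particles under the motion model
    particles = particles + rng.normal(0.0, motion_noise, size=particles.shape)
    # Update: weight each particle by how well it explains the observation
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_noise) ** 2)
    weights = weights / weights.sum()
    # Resample: draw particles in proportion to their weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Track a target drifting from position 0 toward 10
particles = rng.normal(0.0, 5.0, size=500)
weights = np.full(500, 1.0 / 500)
for obs in np.linspace(0.0, 10.0, 20):
    particles, weights = particle_filter_step(particles, weights, obs)
print("estimated position:", particles.mean())
```

Enhancements such as those in the paper replace the blind prediction step with guided particle moves so the filter recovers when targets collide or occlude one another.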
Sensors (Basel)
January 2025
Department of Architectural Engineering, Dankook University, 152 Jukjeon-ro, Yongin-si 16890, Republic of Korea.
In the construction industry, ensuring the proper installation, retention, and dismantling of temporary structures, such as jack supports, is critical to maintaining safety and project timelines. However, inconsistencies between on-site data and construction documentation remain a significant challenge. To address this, the present study proposes an integrated monitoring framework that combines computer vision-based object detection and document recognition techniques.
Sensors (Basel)
January 2025
The 54th Research Institute, China Electronics Technology Group Corporation, College of Signal and Information Processing, Shijiazhuang 050081, China.
Multi-sensor fusion, such as LiDAR- and camera-based 3D object detection, is a key technology in autonomous driving and robotics. However, traditional 3D detection models are limited to recognizing predefined categories and struggle with unknown or novel objects. Given the complexity of real-world environments, research into open-vocabulary 3D object detection is essential.
Sensors (Basel)
January 2025
Engineering Training Center, Nantong University, Nantong 226019, China.
The issue of obstacle avoidance and safety for visually impaired individuals has been a major topic of research. However, complex street environments still pose significant challenges for blind obstacle detection systems. Existing solutions often fail to provide real-time, accurate obstacle avoidance decisions.
Sensors (Basel)
January 2025
College of Metrology Measurement and Instrument, China Jiliang University, Hangzhou 310018, China.
This paper aims to address the challenge of precise robotic grasping of molecular sieve drying bags during automated packaging by proposing a six-dimensional (6D) pose estimation method based on a red-green-blue-depth (RGB-D) camera. The method consists of three components: point cloud pre-segmentation, target extraction, and pose estimation. A minimum bounding box-based pre-segmentation method was designed to minimize the impact of packaging wrinkles and skirt curling.