Arterial Spin Labelling (ASL) imaging derives a perfusion image by tracing the accumulation of magnetically labeled blood water in the brain. As the resulting image has an intrinsically low signal-to-noise ratio (SNR), multiple measurements are routinely acquired and averaged, at the cost of increased scan duration and greater opportunity for motion artefact. However, this strategy alone may be ineffective in clinical settings, where the time available for acquisition is limited and patient motion is more frequent. This study investigates the use of an Independent Component Analysis (ICA) approach for denoising ASL data, and its potential for automation. 72 ASL datasets (pseudo-continuous ASL; 5 different post-labeling delays: 400, 800, 1200, 1600, 2000 ms; total volumes = 60) were collected from 30 consecutive acute stroke patients. The effects of ICA-based denoising (manual and automated) were compared with two other denoising approaches: aCompCor, a principal-component-based method, and Enhancement of Automated Blood Flow Estimates (ENABLE), an algorithm based on the removal of corrupted volumes. Multiple metrics were used to assess changes in data quality following denoising, including changes in cerebral blood flow (CBF) and arterial transit time (ATT), SNR, and repeatability. Additionally, the relationship between SNR and the number of repetitions acquired was estimated before and after denoising. ICA-based denoising resulted in significantly higher mean CBF and ATT values (p < 0.001), lower CBF and ATT variance (p < 0.001), increased SNR (p < 0.001), and improved repeatability (p < 0.05) compared with the raw data. The performance of manual and automated ICA-based denoising was comparable. These improvements exceeded those achieved with aCompCor or ENABLE. Following ICA-based denoising, the SNR was higher using only 50% of the acquired ASL volumes than when using the whole raw dataset.
The results show that ICA can be used to separate signal from noise in ASL data, improving the quality of the data collected. In fact, this study suggests that the acquisition time could be reduced by 50% without penalty to data quality, something that merits further study. Independent component classification and regression can be carried out either manually, following simple criteria, or automatically.
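The workflow described above (decompose the repeated ASL measurements into independent components, classify each component as signal or noise, and regress the noise components out of the data) can be sketched as follows. This is a minimal illustration on synthetic data, not the study's pipeline: the trend-based classification rule is a hypothetical placeholder for the manual or automated criteria the study applied.

```python
# Minimal sketch of ICA-based denoising of a multi-repeat ASL series.
# Synthetic data; the noise-component classifier below is hypothetical.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_vols, n_vox = 60, 500                  # 60 repeats, flattened voxels

# Synthetic "perfusion" signal plus a structured drift noise source
t = np.arange(n_vols)
perfusion = np.outer(np.ones(n_vols), rng.normal(1.0, 0.05, n_vox))
drift = np.outer(t / n_vols, rng.normal(0.5, 0.1, n_vox))
data = perfusion + drift + 0.01 * rng.normal(size=(n_vols, n_vox))

# Decompose volumes-by-voxels data into component time courses
ica = FastICA(n_components=5, random_state=0)
sources = ica.fit_transform(data)        # (n_vols, n_components)

# Hypothetical classifier: flag components whose time course is
# dominated by a monotonic trend (strong correlation with time).
noise_idx = [i for i in range(sources.shape[1])
             if abs(np.corrcoef(sources[:, i], t)[0, 1]) > 0.9]

# Nuisance regression: remove the flagged components from the data
if noise_idx:
    X = sources[:, noise_idx]
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)
    denoised = data - X @ beta
else:
    denoised = data

print("components flagged as noise:", noise_idx)
```

Regressing out (rather than simply zeroing) the flagged components preserves any signal variance that is orthogonal to them, which is why component regression is the usual choice in fMRI/ASL denoising.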
Full text: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6711457 (PMC)
DOI: http://dx.doi.org/10.1016/j.neuroimage.2019.07.002
Over the past two decades, rapid advancements in magnetic resonance technology have significantly enhanced the imaging resolution of functional Magnetic Resonance Imaging (fMRI), far surpassing its initial capabilities. Beyond mapping brain functional architecture at unprecedented scales, high-spatial-resolution acquisitions have also inspired and enabled several novel analytical strategies that can potentially improve the sensitivity and neuronal specificity of fMRI. With small voxels, one can sample from different levels of the vascular hierarchy within the cerebral cortex and resolve the temporal progression of hemodynamic changes from parenchymal to pial vessels.
Front Bioeng Biotechnol, May 2024
Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China.
Surface electromyogram (sEMG) signals have been widely used in human upper limb force estimation and motion intention recognition. However, the electrocardiogram (ECG) artifact generated by the beating of the heart is a major factor that reduces the quality of the sEMG signal when recording from muscles close to the heart. sEMG signals contaminated by ECG artifacts are difficult to interpret correctly.
Hum Brain Mapp, December 2023
Division of Behavioral Medicine and Clinical Psychology, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA.
Preprocessing fMRI data requires striking a fine balance between conserving signals of interest and removing noise. Typical steps of preprocessing include motion correction, slice timing correction, spatial smoothing, and high-pass filtering. However, these standard steps do not remove many sources of noise.
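As a concrete instance of one of the standard steps listed above, temporal high-pass filtering to remove slow scanner drift can be sketched as below. The voxel time course is synthetic, and the TR and 1/128 s cut-off are illustrative assumptions (the cut-off echoes a common package default), not parameters from the study.

```python
# Sketch: temporal high-pass filtering of one voxel's fMRI time course
# to remove slow scanner drift. TR and cut-off are assumed values.
import numpy as np
from scipy.signal import butter, filtfilt

tr = 2.0                                  # repetition time (s), assumed
n = 200
t = np.arange(n) * tr

rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 0.05 * t)     # task-like fluctuation (0.05 Hz)
drift = 0.5 * t / t.max()                 # slow linear scanner drift
ts = signal + drift + 0.1 * rng.normal(size=n)

# Butterworth high-pass at 1/128 Hz, normalized by the Nyquist rate
nyquist = 0.5 / tr
b, a = butter(2, (1 / 128) / nyquist, btype="highpass")
filtered = filtfilt(b, a, ts)             # zero-phase filtering

print("mean before/after:", round(ts.mean(), 3), round(filtered.mean(), 3))
```

The 0.05 Hz fluctuation sits well above the 1/128 Hz cut-off and survives, while the drift (and the DC offset) is suppressed; `filtfilt` applies the filter forward and backward so the filtered series stays aligned in time with the original.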
Sensors (Basel), July 2023
Department of Electrical Engineering, Center for Innovative Research on Aging Society (CIRAS), and Advanced Institute of Manufacturing with High-Tech Innovations (AIM-HI), National Chung Cheng University, Chia-Yi 621, Taiwan.
This paper presents an RGB-NIR (Near Infrared) dual-modality technique to analyze the remote photoplethysmogram (rPPG) signal and hence estimate the heart rate (in beats per minute) from a facial image sequence. Its main contribution is the introduction of several denoising techniques, such as Modified Amplitude Selective Filtering (MASF), Wavelet Decomposition (WD), and Robust Principal Component Analysis (RPCA), which take advantage of RGB and NIR band characteristics to uncover the rPPG signals effectively within an Independent Component Analysis (ICA)-based algorithm. Two datasets are adopted in the experiments: the public PURE dataset and the CCUHR dataset, built with a popular Intel RealSense D435 RGB-D camera.
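Downstream of any such denoising, converting a recovered rPPG trace to a heart rate typically reduces to locating the dominant spectral peak in the plausible cardiac band. A minimal sketch on a simulated trace follows; the frame rate and band limits are assumptions for illustration, not the paper's parameters.

```python
# Sketch: heart-rate estimation (bpm) from an already-denoised rPPG
# trace via the dominant FFT peak in the cardiac band. Frame rate,
# duration, and band limits are assumed values.
import numpy as np

fs = 30.0                            # camera frame rate (Hz), assumed
t = np.arange(0, 20, 1 / fs)         # 20 s of signal
hr_hz = 72 / 60.0                    # simulate a 72 bpm pulse
rng = np.random.default_rng(2)
rppg = np.sin(2 * np.pi * hr_hz * t) + 0.3 * rng.normal(size=t.size)

# Magnitude spectrum of the mean-removed trace
spectrum = np.abs(np.fft.rfft(rppg - rppg.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# Search only the physiologically plausible 42-240 bpm band
band = (freqs >= 0.7) & (freqs <= 4.0)
peak_hz = freqs[band][np.argmax(spectrum[band])]
print("estimated heart rate:", round(peak_hz * 60, 1), "bpm")
```

With a 20 s window the frequency resolution is 0.05 Hz (3 bpm), which is why longer windows or spectral interpolation are used when finer estimates are needed.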
Apert Neuro, January 2022
Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA.
Subject motion during fMRI can affect our ability to accurately measure signals of interest. In recent years, frame censoring-that is, statistically excluding motion-contaminated data within the general linear model using nuisance regressors-has appeared in several task-based fMRI studies as a mitigation strategy. However, there have been few systematic investigations quantifying its efficacy.
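Frame censoring within the general linear model is typically implemented by adding one "spike" regressor per contaminated frame, so that frame's data point is fit exactly by its own regressor and contributes nothing to the estimates of interest. A minimal sketch, with illustrative framewise-displacement values and a commonly used 0.5 mm threshold (both assumptions, not the study's settings):

```python
# Sketch: building one-hot spike regressors for frame censoring.
# FD values and the 0.5 mm threshold are illustrative assumptions.
import numpy as np

fd = np.array([0.1, 0.2, 0.9, 0.15, 0.1, 1.2, 0.1, 0.2])  # FD per frame (mm)
threshold = 0.5
bad = np.flatnonzero(fd > threshold)      # indices of high-motion frames

# One one-hot "spike" column per censored frame, to append to the design
spikes = np.zeros((len(fd), len(bad)))
spikes[bad, np.arange(len(bad))] = 1.0

print("censored frames:", bad.tolist())   # frames 2 and 5 exceed 0.5 mm
```

Appending `spikes` to the design matrix statistically excludes those frames from the fit without changing the length of the time series, which keeps the temporal structure of the remaining regressors intact.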