Background: Parkinson's disease (PD) is a prevalent, chronic neurodegenerative disorder. Although the diagnostic criteria for PD are relatively well defined, current image-based diagnostic procedures are labor-intensive and demand considerable expertise. Highly integrated automatic diagnostic algorithms are therefore desirable.

Methods: In this work, we propose an end-to-end multi-modality diagnostic framework, comprising segmentation, registration, feature extraction, and machine learning, to analyze striatal features for PD diagnosis. Multi-modality images, including T1-weighted MRI and 11C-CFT PET, are integrated into the proposed framework. The reliability of this method is validated on a dataset of paired images from 49 PD subjects and 18 normal (NL) subjects.
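The classification stage described above can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the feature layout (8 hypothetical striatal sub-region values per subject), the synthetic uptake data, and the choice of an RBF-kernel SVM are all assumptions made for demonstration.

```python
# Hedged sketch of the machine-learning stage: striatal ROI features
# (e.g., regional PET uptake) fed to a classifier for PD vs. NL.
# All numbers below are synthetic stand-ins, not study data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Cohort sizes match the dataset in the text: 49 PD, 18 NL subjects.
# n_feat = 8 striatal sub-region features per subject is an assumption.
n_pd, n_nl, n_feat = 49, 18, 8
X_pd = rng.normal(loc=0.8, scale=0.2, size=(n_pd, n_feat))  # reduced uptake
X_nl = rng.normal(loc=1.2, scale=0.2, size=(n_nl, n_feat))  # normal uptake
X = np.vstack([X_pd, X_nl])
y = np.array([1] * n_pd + [0] * n_nl)  # 1 = PD, 0 = NL

# RBF-kernel SVM with 5-fold cross-validation on the pooled features.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold CV accuracy: {scores.mean():.3f}")
```

In a real pipeline, `X` would be produced by the segmentation, registration, and feature-extraction steps applied to the paired MRI/PET volumes.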

Results: The framework achieved promising diagnostic accuracy on the PD/NL classification task. Several comparative experiments were also conducted to validate its performance.

Conclusion: We demonstrated that (1) the automatic segmentation provides accurate results for the diagnostic framework, (2) combining multi-modality images yields better prediction accuracy than using single-modality PET images alone, and (3) striatal volume was found to be irrelevant to PD diagnosis.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6716425
DOI: http://dx.doi.org/10.3389/fnins.2019.00874


