Background: Advances in video image analysis and artificial intelligence provide opportunities to transform how patients are evaluated. In this study, we assessed whether Zoom video recordings of a standardized neurological examination, the Myasthenia Gravis Core Examination (MG-CE), designed for telemedicine evaluations, could be quantified.
Methods: We used Zoom (Zoom Video Communications) videos of patients with myasthenia gravis (MG) who underwent the MG-CE. Computer vision, combined with artificial intelligence methods, was used to develop algorithms to analyze the videos, with a focus on eye and body motions. To assess the examination components involving vocalization, signal-processing methods, including natural language processing (NLP), were developed. A series of algorithms was developed to automatically compute the metrics of the MG-CE.
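As an illustration of the kind of video-derived metric described above, the sketch below computes a normalized eyelid-aperture measure from per-frame facial landmarks. The landmark inputs, the use of iris diameter as a scale reference, and all function names are illustrative assumptions; they are not the authors' published algorithm, which is not specified in this abstract.

```python
# Hypothetical sketch of a lid-position metric from video landmarks.
# Landmark coordinates would come from a face-tracking model applied
# to each Zoom frame; here they are passed in as plain tuples.

def eyelid_aperture(upper_lid, lower_lid, iris_diameter):
    """Vertical lid aperture normalized by iris diameter.

    upper_lid, lower_lid: (x, y) pixel coordinates of the lid margins.
    iris_diameter: iris width in pixels, used as a scale reference so
    the metric is invariant to camera distance and video resolution.
    """
    if iris_diameter <= 0:
        raise ValueError("iris_diameter must be positive")
    aperture_px = abs(lower_lid[1] - upper_lid[1])
    return aperture_px / iris_diameter

def mean_aperture(frames):
    """Average the normalized aperture over a sequence of frames,
    e.g. during a sustained-upgaze ptosis test."""
    values = [eyelid_aperture(f["upper"], f["lower"], f["iris"])
              for f in frames]
    return sum(values) / len(values)
```

Normalizing by an anatomical reference such as iris diameter is one common way to make pixel measurements comparable across recordings made at different distances and resolutions.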
Results: A total of 51 patients with MG were assessed, with videos recorded twice on separate days, while 15 control subjects were evaluated once. We successfully quantified the positions of the lids, eyes, and arms and developed respiratory metrics based on breath counts. The cheek puff exercise was found to have limited value for quantification. Technical limitations included variations in illumination and bandwidth, as well as the recording being conducted from the examiner's side rather than the patient's side.
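The breath-count metric mentioned above could, in principle, be derived from the recording's audio track. The sketch below counts breaths as rising threshold crossings of a smoothed amplitude envelope; the envelope extraction, threshold, and function name are illustrative assumptions, not the study's actual method.

```python
# Hypothetical sketch: counting breaths from a smoothed audio-envelope
# signal. Each breath cycle is assumed to produce one burst of acoustic
# energy, so one rising crossing of the threshold ~ one breath.

def count_breaths(envelope, threshold):
    """Count rising crossings of `threshold` in an amplitude envelope.

    envelope: iterable of non-negative amplitude values, already
    smoothed so that each breath appears as a single peak.
    threshold: amplitude level separating breath bursts from noise.
    """
    breaths = 0
    above = False
    for sample in envelope:
        if not above and sample > threshold:
            breaths += 1
            above = True
        elif above and sample <= threshold:
            above = False
    return breaths
```

A hysteresis-free crossing counter like this is sensitive to the threshold choice; in practice the smoothing window and threshold would need calibration against manually counted breaths.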
Conclusion: Several aspects of the MG-CE can be quantified to produce continuous measurements using standard Zoom video recordings. Further development of the technology will enable trained non-physician healthcare providers to conduct precise examinations of patients with MG outside of conventional clinical settings, including for the purpose of clinical trials.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11652356
DOI: http://dx.doi.org/10.3389/fneur.2024.1474884