Global monitoring of disease vectors is undoubtedly becoming an urgent need as the human population rises and becomes increasingly mobile, international commercial exchanges increase, and climate change expands the habitats of many vector species. Traditional surveillance of mosquitoes, vectors of many diseases, relies on catches, which require regular manual inspection and reporting and dedicated personnel, making large-scale monitoring difficult and expensive. New approaches are solving the problem of scalability by relying on smartphones and the Internet to enable novel community-based and digital observatories, where people can upload pictures of mosquitoes whenever they encounter them. An example is the Mosquito Alert citizen science system, which includes a dedicated mobile phone app through which geotagged images are collected. This system provides a viable option for monitoring the spread of various mosquito species across the globe, although it is partly limited by the quality of the citizen scientists' photos. To make the system useful for public health agencies, and to give feedback to the volunteering citizens, the submitted images are inspected and labeled by entomology experts. Although citizen-based data collection can greatly broaden disease-vector monitoring scales, manual inspection of each image is not an easily scalable option in the long run, and the system could be improved through automation. Based on Mosquito Alert's curated database of expert-validated mosquito photos, we trained a deep learning model to identify tiger mosquitoes (Aedes albopictus), a species responsible for spreading chikungunya, dengue, and Zika, among other diseases. With an area under the receiver operating characteristic curve of 0.96, the model promises not only a helpful pre-selector for the expert validation process but also an automated classifier that gives quick feedback to app participants, which may help keep them motivated. In the paper, we also explore how the model could be used as a feedback loop to improve the quality of future data collection.
Full text: PMC http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7907246
DOI: http://dx.doi.org/10.1038/s41598-021-83657-4
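The abstract above describes a binary deep-learning classifier for tiger-mosquito photos evaluated by ROC AUC. Below is a minimal sketch of that kind of transfer-learning pipeline in PyTorch; the ResNet-50 backbone, folder layout, and hyperparameters are illustrative assumptions, not the pipeline used in the Mosquito Alert study.

```python
# Minimal sketch: binary "tiger mosquito vs. other" image classifier via transfer learning.
# Backbone, paths, and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.metrics import roc_auc_score

device = "cuda" if torch.cuda.is_available() else "cpu"

# Standard ImageNet preprocessing; assumes hypothetical folders data/train/{other,tiger}
# and data/val/{other,tiger}. ImageFolder assigns labels alphabetically, so "tiger" -> 1.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_dl = DataLoader(datasets.ImageFolder("data/train", tfm), batch_size=32, shuffle=True)
val_dl = DataLoader(datasets.ImageFolder("data/val", tfm), batch_size=32)

# Pretrained backbone with a single-logit head for the binary task.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)
model = model.to(device)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):  # illustrative epoch count
    model.train()
    for x, y in train_dl:
        x, y = x.to(device), y.float().unsqueeze(1).to(device)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Evaluate with ROC AUC, the metric reported in the abstract (0.96 in the paper).
model.eval()
scores, labels = [], []
with torch.no_grad():
    for x, y in val_dl:
        scores.extend(torch.sigmoid(model(x.to(device))).squeeze(1).cpu().tolist())
        labels.extend(y.tolist())
print("val ROC AUC:", roc_auc_score(labels, scores))
```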
Med Phys
January 2025
Department of Oncology, The Affiliated Hospital of Southwest Medical University, Luzhou, China.
Background: Kidney tumors, common in the urinary system, have widely varying survival rates post-surgery. Current prognostic methods rely on invasive biopsies, highlighting the need for non-invasive, accurate prediction models to assist in clinical decision-making.
Purpose: This study aimed to construct a K-means clustering algorithm enhanced by Transformer-based feature transformation to predict the overall survival rate of patients after kidney tumor resection and provide an interpretability analysis of the model to assist in clinical decision-making.
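The Purpose statement above combines a Transformer-based feature transformation with K-means clustering. The following is a minimal sketch of that combination, using random stand-in data and an off-the-shelf PyTorch Transformer encoder; the dimensions, cluster count, and data are assumptions, not the study's actual model.

```python
# Minimal sketch: K-means clustering on Transformer-transformed patient features.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

# Hypothetical input: 200 patients, each a sequence of 16 feature tokens of dimension 32
# (all dimensions chosen arbitrarily for the sketch).
x = torch.randn(200, 16, 32)

# A small Transformer encoder acts as the feature-transformation step.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True),
    num_layers=2,
)
with torch.no_grad():
    tokens = encoder(x)            # (200, 16, 32) transformed token sequences
    features = tokens.mean(dim=1)  # mean-pool tokens into one vector per patient

# K-means groups patients into risk clusters; the actual study would then compare
# post-resection survival across clusters (e.g., with Kaplan-Meier analysis).
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features.numpy())
print(clusters[:10])
```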
Geroscience
January 2025
Department of Neurology, Ewha Womans University Mokdong Hospital, Ewha Womans University College of Medicine, Seoul, Republic of Korea.
Background: Superagers, older adults with exceptional cognitive abilities, show preserved brain structure compared to typical older adults. We investigated whether superagers have biologically younger brains based on their structural integrity.
Methods: A cohort of 153 older adults (aged 61-93) was recruited, with 63 classified as superagers based on superior episodic memory and 90 as typical older adults, of whom 64 were followed up after two years.
J Imaging Inform Med
January 2025
Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Eye Disease, Shanghai, 200080, China.
The objectives of this study are to construct a deep convolutional neural network (DCNN) model to diagnose and classify meibomian gland dysfunction (MGD) based on in vivo confocal microscope (IVCM) images and to evaluate the performance of the DCNN model and its auxiliary significance for clinical diagnosis and treatment. We extracted 6643 IVCM images from the IVCM databases of three hospitals as the training set for the DCNN model and 1661 IVCM images from the IVCM databases of two other hospitals as the test set to examine the performance of the model. The DCNN model was constructed using DenseNet-169.
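DenseNet-169 is the only architectural detail given above. A minimal sketch of setting it up for multi-class MGD classification follows; the number of diagnostic categories is an assumption, since the snippet does not state it.

```python
# Minimal sketch: DenseNet-169 with a replaced classifier head for MGD grading.
import torch.nn as nn
from torchvision import models

N_CLASSES = 4  # hypothetical number of MGD categories

model = models.densenet169(weights=models.DenseNet169_Weights.DEFAULT)
model.classifier = nn.Linear(model.classifier.in_features, N_CLASSES)
# The 6643 training and 1661 test IVCM images mentioned above would then be fed
# through standard ImageFolder/DataLoader pipelines, as in the first sketch.
```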
J Imaging Inform Med
January 2025
Department of Orthopedic Surgery, Arrowhead Regional Medical Center, Colton, CA, USA.
Rib pathology is uniquely difficult and time-consuming for radiologists to diagnose. AI can reduce radiologist workload and serve as a tool to improve diagnostic accuracy. To date, no review has synthesized data on AI-based rib fracture identification, its diagnostic performance on X-ray and CT scans, or its comparison with physicians.
J Imaging Inform Med
January 2025
College of Engineering, Department of Computer Engineering, Koç University, Rumelifeneri Yolu, 34450, Sarıyer, Istanbul, Turkey.
This study explores a transfer learning approach with vision transformers (ViTs) and convolutional neural networks (CNNs) for classifying retinal diseases, specifically diabetic retinopathy, glaucoma, and cataracts, from ophthalmoscopy images. Using a balanced subset of 4217 images and ophthalmology-specific pretrained ViT backbones, this method demonstrates significant improvements in classification accuracy, offering potential for broader applications in medical imaging. Glaucoma, diabetic retinopathy, and cataracts are common eye diseases that can cause vision loss if not treated.
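A minimal sketch of the ViT transfer-learning setup described above, using torchvision's generic ViT-B/16 weights as a stand-in for the ophthalmology-specific pretrained backbones the study mentions (which are not specified here); the three-class head matches the diseases listed in the snippet.

```python
# Minimal sketch: ViT-B/16 with a new head for retinal disease classification
# (diabetic retinopathy, glaucoma, cataract).
import torch.nn as nn
from torchvision import models

vit = models.vit_b_16(weights=models.ViT_B_16_Weights.DEFAULT)
vit.heads.head = nn.Linear(vit.heads.head.in_features, 3)  # 3 disease classes (assumed)
# Fine-tuning would proceed as in the first sketch: cross-entropy loss over the
# ophthalmoscopy images with standard 224x224 ImageNet preprocessing.
```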