Objective: Tissue slides from Oral cavity squamous cell carcinoma (OC-SCC), particularly the epithelial regions, hold morphologic features that are both diagnostic and prognostic. Yet, previously developed approaches for automated epithelium segmentation in OC-SCC have not been independently tested in a multi-center setting. In this study, we aimed to investigate the effectiveness and applicability of a convolutional neural network (CNN) model to perform epithelial segmentation using digitized H&E-stained diagnostic slides from OC-SCC patients in a multi-center setting.
Methods: A CNN model was developed to segment the epithelial regions of digitized slides (n = 810), retrospectively collected from five different centers. Deep learning models were trained and validated using well-annotated tissue microarray (TMA) images (n = 212) at various magnifications. The best-performing model was locked down and used for independent testing on a total of 478 whole-slide images (WSIs). Manually annotated epithelial regions served as the reference standard for evaluation. We also compared the model-generated results against IHC-stained epithelium (n = 120) as the reference.
Results: The locked-down CNN model trained on the TMA image training cohorts at 10x magnification achieved the best segmentation performance. The locked-down model performed consistently, yielding Pixel Accuracy, Recall Rate, Precision Rate, and Dice Coefficient values that ranged from 95.8% to 96.6%, 79.1% to 93.8%, 85.7% to 89.3%, and 82.3% to 89.0%, respectively, across the three independent testing WSI cohorts.
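The abstract does not describe how the four reported metrics were implemented. As a minimal sketch, assuming the predicted and reference segmentations are available as binary NumPy masks (the function name and array representation here are illustrative, not from the paper), they can be computed from pixel-level true/false positives and negatives:

```python
import numpy as np

def segmentation_metrics(pred, ref):
    """Compute pixel accuracy, recall, precision, and Dice coefficient
    for a predicted binary mask against a reference binary mask."""
    pred = np.asarray(pred).astype(bool)
    ref = np.asarray(ref).astype(bool)
    tp = np.logical_and(pred, ref).sum()        # epithelium correctly segmented
    tn = np.logical_and(~pred, ~ref).sum()      # background correctly left out
    fp = np.logical_and(pred, ~ref).sum()       # over-segmented pixels
    fn = np.logical_and(~pred, ref).sum()       # missed epithelial pixels
    pixel_accuracy = (tp + tn) / pred.size
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    return pixel_accuracy, recall, precision, dice
```

Note that the Dice coefficient equals the harmonic mean of precision and recall, which is why the reported Dice ranges fall between the corresponding recall and precision ranges.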
Conclusion: The automated model achieved a consistently accurate performance for automated epithelial region segmentation compared to manual annotations. This model could be integrated into a computer-aided diagnosis or prognosis system.
DOI: http://dx.doi.org/10.1016/j.oraloncology.2022.105942