Cervical cancer is a severe threat to women's health worldwide; its long carcinogenesis cycle and clear etiology make early screening vital for prevention and treatment. Based on a dataset provided by the Obstetrics and Gynecology Hospital of Fudan University, a four-category classification model for cervical lesions is developed, covering Normal, low-grade squamous intraepithelial lesion (LSIL), high-grade squamous intraepithelial lesion (HSIL), and cancer (Ca). To make full use of the research data while preserving dataset size, the model inputs include original and acetic colposcopy images, lesion segmentation masks, human papillomavirus (HPV) status, ThinPrep cytologic test (TCT) results, and age; iodine images are excluded because they overlap substantially with the lesions visible in acetic images. First, the change information between the original and acetic images is introduced by computing acetowhite opacity, mining the correlation between acetowhite thickness and lesion grade. Second, the lesion segmentation masks introduce prior knowledge of lesion location and shape into the classification model. Finally, a cross-modal feature fusion module based on the self-attention mechanism fuses image information with clinical text information, revealing correlations between the features. On this dataset, the proposed model is comprehensively compared with five strong models from the past three years, demonstrating superior classification performance and a better balance between performance and complexity. Ablation experiments further show that each proposed module independently improves model performance.
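The abstract does not give the exact formula for acetowhite opacity, so the following is only a minimal sketch of one plausible definition: the mean intensity increase inside the lesion mask between the original and post-acetic images. The function name and the grayscale/binary-mask conventions are assumptions, not the authors' implementation.

```python
import numpy as np

def acetowhite_opacity(original: np.ndarray, acetic: np.ndarray,
                       mask: np.ndarray) -> float:
    """Hypothetical opacity score: mean brightening inside the lesion
    mask after acetic acid is applied (grayscale image arrays)."""
    diff = acetic.astype(np.float32) - original.astype(np.float32)
    lesion = mask > 0
    # Guard against an empty mask to avoid averaging over zero pixels.
    return float(diff[lesion].mean()) if lesion.any() else 0.0
```

Likewise, here is a minimal PyTorch sketch of self-attention-based cross-modal fusion, assuming the clinical scalars (HPV, TCT, age) are embedded as tokens and attended jointly with flattened image features; the layer sizes, pooling step, and class name are illustrative assumptions, not the paper's module.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Sketch: self-attention over concatenated image and clinical tokens."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.clinical_embed = nn.Linear(1, dim)  # one token per clinical scalar
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens: torch.Tensor,
                clinical: torch.Tensor) -> torch.Tensor:
        # img_tokens: (B, N, dim) flattened image feature map
        # clinical:   (B, C) scalars, e.g. [HPV, TCT, age]
        clin_tokens = self.clinical_embed(clinical.unsqueeze(-1))  # (B, C, dim)
        tokens = torch.cat([img_tokens, clin_tokens], dim=1)       # (B, N+C, dim)
        fused, _ = self.attn(tokens, tokens, tokens)               # self-attention
        tokens = self.norm(tokens + fused)                         # residual + norm
        return tokens[:, : img_tokens.size(1)].mean(dim=1)         # pooled feature
```

For example, `CrossModalFusion()(torch.randn(2, 49, 256), torch.tensor([[1.0, 2.0, 45.0], [0.0, 1.0, 38.0]]))` returns a (2, 256) fused feature, one vector per sample, ready for a classification head.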
DOI: http://dx.doi.org/10.1016/j.compbiomed.2024.108589