The performance of deep learning models in the health domain is severely limited by the scarcity of labeled data, especially for specialized clinical tasks. Conversely, large volumes of unlabeled clinical data are available and can be exploited to improve deep learning models whose labeled training data are limited. This paper investigates the use of task-specific unlabeled data to boost the performance of classification models for the risk stratification of suspected acute coronary syndrome. By leveraging large numbers of unlabeled clinical notes in task-adaptive language model pretraining, valuable task-specific prior knowledge can be attained. Task-specific fine-tuning of such pretrained models with limited labeled data then yields better performance. Extensive experiments demonstrate that language models pretrained on task-specific unlabeled data can significantly improve the performance of downstream models on specific classification tasks.
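To make the two-stage recipe concrete, here is a minimal sketch assuming a BERT-style encoder and the Hugging Face `transformers` and `datasets` APIs. The model checkpoint, file names, label count, and hyperparameters are illustrative assumptions, not the paper's actual configuration.

```python
# Sketch: task-adaptive pretraining (masked LM) on unlabeled notes,
# then classification fine-tuning on a small labeled set.
# All file names and hyperparameters below are assumed for illustration.
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# --- Stage 1: task-adaptive pretraining on unlabeled clinical notes ---
unlabeled = load_dataset("text", data_files={"train": "unlabeled_notes.txt"})
unlabeled = unlabeled.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)
mlm_model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
mlm_trainer = Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="tapt-ckpt", num_train_epochs=3),
    train_dataset=unlabeled["train"],
    # Standard 15% token masking; the collator builds the MLM labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
mlm_trainer.train()
mlm_trainer.save_model("tapt-ckpt")

# --- Stage 2: fine-tune the adapted encoder on limited labeled data ---
# Assumes a CSV with "text" and "label" columns; Trainer maps "label"
# to the "labels" field the model expects.
labeled = load_dataset("csv", data_files={"train": "labeled_notes.csv"})
labeled = labeled.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
)
clf_model = AutoModelForSequenceClassification.from_pretrained(
    "tapt-ckpt",
    num_labels=3,  # e.g. three risk strata; the label count is assumed
)
clf_trainer = Trainer(
    model=clf_model,
    args=TrainingArguments(output_dir="clf-ckpt", num_train_epochs=5),
    train_dataset=labeled["train"],
)
clf_trainer.train()
```

The key design point this sketch reflects is that Stage 2 initializes from the Stage 1 checkpoint rather than the generic base model, so the classifier inherits the task-specific knowledge acquired from the unlabeled notes.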
Full text: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10785873 (PMC)