Automated classification of online reviews of otolaryngologists.

Objectives: The study aimed to extract online comments about otolaryngologists in the 20 most populated cities in the United States from healthgrades.com; to develop and validate a natural language processing (NLP) logistic regression algorithm for automated text classification of reviews into 10 categories; and to compare 1- and 5-star reviews across directly physician-related and non-physician-related categories.

Methods: A total of 1,977 1-star and 12,682 5-star reviews were collected. The primary investigator manually categorized a training dataset of 324 1-star and 909 5-star reviews, while a validation subset of 100 5-star and 50 1-star reviews underwent dual manual categorization. Using scikit-learn, an NLP algorithm was trained and validated on these subsets, with F1 scores evaluating text classification accuracy against the manual categorization. The algorithm was then applied to the entire dataset, and review categorization was compared between 1- and 5-star reviews.
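
The abstract does not report the exact feature representation or model configuration, so the sketch below is a hypothetical scikit-learn workflow consistent with the Methods: TF-IDF features feeding a multiclass logistic regression, with invented review texts and category labels standing in for the curated training and validation subsets.

# Minimal sketch of the kind of scikit-learn workflow described in the Methods.
# The TF-IDF features, model settings, and example reviews/labels below are
# assumptions for illustration, not the authors' actual configuration or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import Pipeline

# Placeholder reviews with manually assigned categories (training subset).
train_texts = [
    "Waited over an hour past my appointment time",
    "The front desk never returned my calls to reschedule",
    "Dr. X explained the procedure clearly and listened to my concerns",
    "Billing charged me twice for the same visit",
]
train_labels = ["wait time", "office scheduling", "bedside manner", "billing"]

# Placeholder validation subset with its manual categorization.
val_texts = [
    "Two-hour wait in a crowded lobby",
    "He was kind and answered every question",
]
val_labels = ["wait time", "bedside manner"]

# TF-IDF features feeding a logistic regression text classifier.
clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("logreg", LogisticRegression(max_iter=1000)),
])
clf.fit(train_texts, train_labels)

# Per-category F1 scores against the manual categorization of the validation set.
predictions = clf.predict(val_texts)
print(f1_score(val_labels, predictions, average=None, labels=sorted(set(train_labels))))

In this sketch, the trained pipeline would then be applied to all 14,659 collected reviews via clf.predict before comparing category frequencies by star rating.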

Results: F1 scores for NLP validation ranged from 0.71 to 0.97. Significant associations emerged between 1-star reviews and treatment plan, accessibility, wait time, office scheduling, billing, and facilities. Five-star reviews were associated with surgery/procedure, bedside manner, and staff/mid-levels.
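
The abstract does not state which statistical test underlies these associations; a common choice for comparing category frequencies between 1- and 5-star reviews would be a chi-square test on a per-category contingency table. In the sketch below the in/out counts are hypothetical, and only the group totals (1,977 and 12,682) come from the abstract.

# Hypothetical chi-square test of association between star rating and one
# review category. The in/out counts are invented, only the totals match the
# abstract, and the choice of test is an assumption (the paper does not name one).
from scipy.stats import chi2_contingency

category = "wait time"
one_star_in, one_star_out = 410, 1567       # splits the 1,977 one-star reviews
five_star_in, five_star_out = 350, 12332    # splits the 12,682 five-star reviews

table = [
    [one_star_in, one_star_out],
    [five_star_in, five_star_out],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"{category}: chi2 = {chi2:.1f}, p = {p:.3g}")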

Conclusion: The study successfully validated an NLP text classification system for categorizing online physician reviews. Positive reviews were associated with directly physician-related content, while 1-star reviews were related to treatment plan, accessibility, wait time, office scheduling, billing, and facilities. This method of text classification effectively discerned the nuances of human-written text, providing scalable insight into online healthcare feedback.

Level of evidence: Level 3.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11558699
DOI: http://dx.doi.org/10.1002/lio2.70036

Publication Analysis

Top Keywords (frequency)
text classification: 16
5-star reviews: 16
1-star reviews: 12
reviews: 11
manual categorization: 8
reviews treatment: 8
treatment plan: 8
plan accessibility: 8
accessibility wait: 8
wait time: 8
