Background: Despite increasing interest in how conversational agents might improve health care delivery and information dissemination, there is limited research assessing the quality of health information provided by these technologies, especially in orthognathic surgery (OGS).
Purpose: This study aimed to measure and compare the quality of responses from four virtual assistants (VAs) in addressing frequently asked questions about OGS.
Study Design, Setting, And Sample: This in-silico cross-sectional study assessed the responses of a sample of four VAs to a standardized set of 10 questions related to OGS.
Independent Variable: The independent variable was the VA tested, with four levels: VA1: Alexa (Amazon, Seattle, Washington), VA2: Google Assistant (Google, Mountain View, California), VA3: Siri (Apple, Cupertino, California), and VA4: Bing (Microsoft, Redmond, Washington).
Main Outcome Variable(s): The primary outcome variable was the quality of the answers generated by the four VAs. Four investigators (two orthodontists and two oral surgeons) rated each VA's responses to the standardized set of 10 questions on a five-point modified Likert scale, with the lowest score (1) signifying the highest quality. The main outcome variables were the combined mean scores of the responses from each VA; the secondary outcome was the variability in responses among the different investigators.
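As a minimal illustration of this scoring scheme (the ratings below are invented placeholders, not study data), each VA yields a 10-question × 4-investigator matrix of 1-5 Likert scores, and the overall mean and standard deviation of that matrix give a "mean ± SD" figure like those reported in the Results:

```python
import numpy as np

# Hypothetical 10-question x 4-investigator matrix of 1-5 Likert ratings
# for a single VA (1 = highest quality). Values are illustrative only.
ratings = np.array([
    [1, 2, 1, 1],
    [2, 1, 2, 1],
    [1, 1, 1, 2],
    [3, 2, 2, 2],
    [1, 1, 2, 1],
    [2, 2, 1, 1],
    [1, 1, 1, 1],
    [2, 3, 2, 2],
    [1, 2, 1, 1],
    [1, 1, 1, 2],
])

combined_mean = ratings.mean()           # combined mean score for this VA
combined_sd = ratings.std(ddof=1)        # sample standard deviation
per_investigator = ratings.mean(axis=0)  # per-investigator means (variability check)

print(f"Combined score: {combined_mean:.2f} \u00b1 {combined_sd:.2f}")
print("Mean score per investigator:", per_investigator)
```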
Covariates: None.
Analyses: One-way analysis of variance was used to compare the average scores per question. One-way analysis of variance followed by Tukey's post hoc analysis was used to compare the combined mean scores among the VAs, and the combined mean scores of all questions were evaluated to determine any variability in how each VA responded to the different investigators.
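A minimal sketch of this statistical workflow, assuming Python with scipy and statsmodels; the score arrays are randomly generated stand-ins for the study's ratings, not actual data:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical 1-5 Likert scores: 10 questions x 4 raters = 40 scores per VA.
scores = {
    "VA1": rng.integers(1, 6, 40),
    "VA2": rng.integers(1, 4, 40),
    "VA3": rng.integers(2, 6, 40),
    "VA4": rng.integers(1, 3, 40),
}

# One-way ANOVA across the four VAs (lower score = higher quality).
f_stat, p_value = stats.f_oneway(*scores.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's post hoc test identifies which VA pairs differ significantly.
values = np.concatenate(list(scores.values()))
groups = np.repeat(list(scores.keys()), 40)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```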
Results: Among the four VAs, VA4 (1.32 ± 0.57) had the significantly lowest (best) score, followed by VA2 (1.55 ± 0.78), VA1 (2.67 ± 1.49), and VA3 (3.52 ± 0.50) (P value <.001). Responses did not differ significantly across investigators for VA3 (P value = .46), VA4 (P value = .45), or VA2 (P value = .44); only VA1 showed significant variability across investigators (P value = .003).
Conclusion And Relevance: The VAs responded to queries related to OGS, with VA4 displaying the best-quality responses, followed by VA2, VA1, and VA3. Technology companies and clinical organizations should partner to develop an intelligent VA with evidence-based responses specifically curated to educate patients.
DOI: http://dx.doi.org/10.1016/j.joms.2024.04.013