Background: There is increasing interest in shared decision making (SDM) in Australia. Question prompt lists (QPLs) support question asking by patients, a key part of SDM. QPLs have been studied in a variety of settings, and increasingly the internet provides a source of suggested questions for patients. Environmental scans have been shown to be useful in assessing the availability and quality of online SDM tools.
Objective: This study aimed to assess the number and readability of QPLs available to users via Google.com.au.
Methods: Our environmental scan used search terms derived from literature and reputable websites to search for QPLs available via Google.com.au. Following removal of duplicates from the 4000 URLs and 22 reputable sites, inclusion and exclusion criteria were applied to create a list of unique QPLs. A sample of 20 QPLs was further assessed for list length, proxy measures of quality such as a date of review, and evidence of doctor endorsement. Readability of the sample QPL instructions and QPLs themselves was assessed using Flesch Reading Ease and Flesch-Kincaid Grade Level scores.
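The two readability measures named in the Methods are the standard Flesch formulas, which depend only on sentence, word, and syllable counts. The sketch below illustrates how such scores can be computed; the study does not state which tool it used, so this snippet (including its crude vowel-group syllable counter) is an illustrative assumption, not the authors' actual pipeline.

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels (a rough heuristic, not a dictionary lookup)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level) for a block of text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)

    words_per_sentence = n_words / sentences
    syllables_per_word = n_syllables / n_words

    # Standard Flesch Reading Ease and Flesch-Kincaid Grade Level formulas.
    ease = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
    grade = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
    return ease, grade

# Example: score a single (hypothetical) prompt-list question.
ease, grade = readability("What are the possible side effects of this treatment?")
print(f"Reading Ease: {ease:.1f}, Grade Level: {grade:.1f}")
```

In practice, syllable counting is the main source of variation between readability tools, so scores from this sketch may differ slightly from those reported in the study.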
Results: Our environmental scan identified 173 unique QPLs available to users. Lists ranged in length from 1 question to more than 200 questions. Of our sample, 50% (10/20) had a listed date of creation or update, and 60% (12/20) had evidence of authorship or source. Flesch-Kincaid Grade Level scores for instructions were higher than for the QPLs themselves (grades 10.3 and 7.7, respectively). QPLs from reputable sites scored more than one grade level lower than QPLs from other sites (grades 4.2 and 5.4, respectively).
Conclusions: People seeking questions to ask their doctor using Google.com.au encounter a vast number of question lists that they can use to prepare for consultations with their doctors. Markers of the quality or usefulness of various types of online QPLs, either surrogate or direct, have not yet been established, which makes it difficult to assess the value of the abundance of lists. Doctor endorsement of question asking has previously been shown to be an important factor in the effectiveness of QPLs, but information regarding this is not readily available online. Whether these diverse QPLs are endorsed by medical practitioners warrants further investigation.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7293062 | PMC |
| http://dx.doi.org/10.2196/17002 | DOI Listing |