Many social scientists appear to be overconfident in the reliability of research results from a single, small-sample inferential study. In this article, the authors speculate that "user-friendly" statistics packages can exacerbate statistical misinterpretation by giving researchers a tool to explore data easily and flag what they interpret as "reliable" relationships. The article presents an empirical demonstration of the problems that arise when a large number of statistical tests are interpreted. Results show that statistically significant results may be unreliable: a true zero relationship can erroneously appear as a medium- to large-effect-size relationship when a small sample is used (e.g., n = 30). The authors suggest multiple replications as the criterion for a reliable finding.
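The claim that a true zero relationship can masquerade as a medium-to-large effect at n = 30 is easy to check by simulation. The sketch below is not from the article; it is a minimal illustration that correlates pairs of independent normal samples (so the true correlation is exactly zero) and counts how often the observed sample correlation reaches Cohen's conventional "medium" benchmark of |r| ≥ .30 (an assumed threshold, chosen here for illustration).

```python
import random
import statistics

def pearson_r(x, y):
    """Sample Pearson correlation between two equal-length lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def null_effect_sizes(n=30, trials=10_000, seed=1):
    """Draw pairs of INDEPENDENT standard-normal samples of size n
    (true r = 0) and return the observed sample correlations."""
    rng = random.Random(seed)
    rs = []
    for _ in range(trials):
        x = [rng.gauss(0, 1) for _ in range(n)]
        y = [rng.gauss(0, 1) for _ in range(n)]
        rs.append(pearson_r(x, y))
    return rs

rs = null_effect_sizes()
medium_or_larger = sum(abs(r) >= 0.30 for r in rs) / len(rs)
print(f"share of null samples reaching a 'medium' effect (|r| >= .30): "
      f"{medium_or_larger:.1%}")
```

With n = 30 the sampling distribution of r under the null has a standard error of roughly 1/sqrt(n - 1) ≈ .19, so a non-trivial fraction of purely null samples clears the medium-effect threshold by chance alone, which is the phenomenon the abstract describes. Larger samples shrink that fraction sharply.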
DOI: http://dx.doi.org/10.1080/00223980009600857