In this meta-analysis, we describe benchmark values for the reliability and stability of delay and probability discounting that might be used to (a) evaluate the meaningfulness of clinically achieved changes in discounting and (b) support the role of discounting as a valid and enduring measure of intertemporal choice. We examined test-retest reliability, stability effect sizes (d; Cohen, 1992), and relevant moderators across 30 publications comprising 39 independent samples and 262 measures of discounting, identified via a systematic review of the PsycINFO, PubMed, and Google Scholar databases. We calculated omnibus effect-size estimates and evaluated proposed moderators using a robust variance estimation meta-regression method. The meta-regression indicated modest test-retest reliability, r = .670, p < .001, 95% CI [.618, .716]. Discounting was most reliable when measured in the context of temporal constraints, in adult respondents, when money was the medium, and when reassessment occurred within 1 month. Analyses also suggested acceptable stability, with small and nonsignificant changes in effect magnitude over time, d = 0.048, p = .31, 95% CI [-0.051, 0.146]. Clinicians and researchers seeking to measure discounting can consider the contexts in which reliability is maximized when selecting measures for specific cases.
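For readers unfamiliar with the two statistics summarized above, here is a minimal illustrative sketch of how test-retest reliability (a Pearson correlation between two administrations) and a stability effect size (one common pooled-SD formulation of Cohen's d) are computed. This is not the authors' analysis pipeline, and the hypothetical discounting scores below are invented for demonstration; the meta-analysis's exact effect-size computation may differ.

```python
# Illustrative sketch only: the two statistics reported in the abstract,
# computed on made-up discounting scores from two assessment occasions.
import math
import statistics

def pearson_r(x, y):
    """Pearson correlation between paired scores (test-retest reliability)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cohens_d(x, y):
    """Standardized mean change, pooled-SD form (one common d formulation)."""
    nx, ny = len(x), len(y)
    pooled_sd = math.sqrt(
        ((nx - 1) * statistics.variance(x) + (ny - 1) * statistics.variance(y))
        / (nx + ny - 2)
    )
    return (statistics.fmean(y) - statistics.fmean(x)) / pooled_sd

# Hypothetical discounting scores at test and retest (invented data).
time1 = [0.80, 0.65, 0.90, 0.50, 0.70, 0.60]
time2 = [0.75, 0.70, 0.85, 0.55, 0.72, 0.58]

print(f"test-retest r = {pearson_r(time1, time2):.3f}")
print(f"stability d   = {cohens_d(time1, time2):.3f}")
```

A reliability near the meta-analytic benchmark (r ≈ .67) with a d near zero would match the pattern reported above: scores correlate across occasions while showing little systematic drift.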


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11078611
DOI: http://dx.doi.org/10.1002/jeab.910
