Mobile health (mHealth) intervention systems can employ adaptive strategies to interact with users. Instead of designing such complex strategies manually, reinforcement learning (RL) can be used to optimize intervention strategies adaptively with respect to the user's context. In this paper, we focus on the problem of overwhelming users with interactions while an RL-based mHealth intervention agent is still learning a good adaptive strategy. We present a data-driven approach that integrates psychological insights with knowledge extracted from historical data. It allows RL agents to optimize the delivery of context-aware notifications from empirical data even when counterfactual information (user responses when receiving notifications) is missing. Our approach also imposes a constraint on notification frequency, which reduces the interaction burden on users. We evaluated the approach in several simulation scenarios built on large-scale real-world running data. The results indicate that our RL agent delivers notifications with a higher behavioral impact than context-blind strategies.
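
For illustration, the sketch below shows one way a constrained, context-aware notification policy of this kind could be implemented with tabular Q-learning. The class name ContextualNotifier, the daily budget, and the state/reward handling are hypothetical and are not taken from the paper; in particular, the sketch does not reproduce the paper's handling of missing counterfactual information, only the constrained decision loop.

# Hypothetical sketch (not the paper's algorithm): a tabular Q-learning
# agent that decides whether to send a notification in a given context,
# subject to a cap on notifications per day.
import random
from collections import defaultdict

class ContextualNotifier:
    def __init__(self, actions=("send", "skip"), daily_budget=2,
                 alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)        # Q-values keyed by (context, action)
        self.actions = actions
        self.daily_budget = daily_budget   # max notifications allowed per day (assumed)
        self.sent_today = 0
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, context):
        # Frequency constraint: once the budget is spent, only "skip" is allowed.
        allowed = self.actions if self.sent_today < self.daily_budget else ("skip",)
        if random.random() < self.epsilon:
            action = random.choice(allowed)            # explore
        else:
            action = max(allowed, key=lambda a: self.q[(context, a)])  # exploit
        if action == "send":
            self.sent_today += 1
        return action

    def update(self, context, action, reward, next_context):
        # Standard one-step Q-learning update from an observed transition.
        best_next = max(self.q[(next_context, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(context, action)] += self.alpha * (td_target - self.q[(context, action)])

    def new_day(self):
        self.sent_today = 0                # reset the daily notification budget

The budget check in act() is what enforces the notification-frequency constraint: once the daily cap is reached, only the skip action remains available, so the agent can only learn to spend its limited notifications in the contexts where they yield the most behavioral impact.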


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8523513
DOI: http://dx.doi.org/10.1007/s10916-021-01773-0

Publication Analysis

Top Keywords
mobile health (8); reinforcement learning (8); mhealth intervention (8); notifications (5); optimizing adaptive (4); adaptive notifications (4); notifications mobile (4); health interventions (4); interventions systems (4); systems reinforcement (4)
