Wearable devices with embedded sensors can provide personalized healthcare and wellness benefits in digital phenotyping and adaptive interventions. However, the collection, storage, and transmission of biometric data (including processed features rather than raw signals) from these devices pose significant privacy concerns. This quantitative, data-driven study examines the privacy risks associated with wearable-based digital phenotyping practices, with a focus on user re-identification (ReID), the process of recovering participants' identities from de-identified digital phenotyping datasets. We propose a machine-learning-based computational pipeline to evaluate and quantify model outcomes under various configurations, investigating the factors that influence ReID risks and their trade-offs with predictive benefits. The pipeline leverages features extracted from three wearable sensors and achieves up to 68.43% ReID accuracy for a sample of N = 45 socially anxious participants using only descriptive features of 10-second observations. Additionally, we explore the trade-offs between privacy risks and predictive benefits by adjusting various settings (e.g., the ways extracted features are processed). Our findings highlight the importance of privacy in digital phenotyping and suggest potential future directions.
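
To make the core idea concrete, the sketch below frames ReID risk as the cross-validated accuracy of a classifier that predicts participant IDs from de-identified, per-window sensor features. This is a minimal illustration under stated assumptions, not the authors' actual pipeline: the use of scikit-learn, the random-forest model, the feature layout, and the synthetic stand-in data are all assumptions introduced here for illustration.

```python
# Hypothetical sketch of a re-identification (ReID) risk evaluation, NOT the
# authors' pipeline: train a classifier to predict participant IDs from
# de-identified per-window features and report its accuracy as the risk.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def reid_risk(features: np.ndarray, participant_ids: np.ndarray) -> float:
    """Estimate ReID risk as the cross-validated accuracy of predicting
    participant IDs from descriptive features of 10-second windows.

    features:        (n_windows, n_features) matrix, e.g. per-window summary
                     statistics of three wearable sensor streams (assumed).
    participant_ids: (n_windows,) array of de-identified participant labels.
    """
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    # Accuracy well above the 1/N chance level (~2.2% for N = 45 participants)
    # indicates that "de-identified" features still leak identity.
    scores = cross_val_score(clf, features, participant_ids,
                             cv=5, scoring="accuracy")
    return scores.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_participants, windows_each, n_features = 45, 40, 12
    ids = np.repeat(np.arange(n_participants), windows_each)
    # Synthetic stand-in data: small per-participant offsets mimic the kind of
    # identity leakage that real wearable features can carry.
    X = rng.normal(size=(ids.size, n_features)) + ids[:, None] * 0.05
    print(f"Estimated ReID risk (accuracy): {reid_risk(X, ids):.2%}")
```

Varying the feature-processing choices fed into `reid_risk` (and, in parallel, into a downstream predictive model) is one way to trace the privacy-versus-utility trade-off the abstract describes.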

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11581184
DOI: http://dx.doi.org/10.1109/bsn58485.2023.10331378
