Automatic prediction of mental health crises can improve caseload prioritization and enable preventative interventions, improving patient outcomes and reducing costs. We combine structured electronic health records (EHRs) with clinical notes from 59,750 de-identified patients to predict the risk of mental health crisis relapse within the next 28 days. The results suggest that an ensemble machine learning model that relies on both structured EHRs and clinical notes when the notes are available, and falls back to structured data alone when they are not, outperforms models trained on either of the two data streams alone.
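A minimal sketch of the fallback routing described in this abstract, assuming a scikit-learn-style predict_proba interface; the model objects, the clinical_notes column name, and the function itself are hypothetical illustrations, not the paper's code.

    import pandas as pd

    def predict_crisis_risk(patients: pd.DataFrame, joint_model, structured_model) -> pd.Series:
        """Score 28-day crisis risk, routing each patient to the richer model when notes exist."""
        # A patient "has notes" if the free-text field is present and non-empty (hypothetical column name).
        has_notes = patients["clinical_notes"].fillna("").str.strip().ne("")
        risk = pd.Series(0.0, index=patients.index)
        if has_notes.any():
            # Model trained on structured EHRs plus clinical notes.
            risk.loc[has_notes] = joint_model.predict_proba(patients.loc[has_notes])[:, 1]
        if (~has_notes).any():
            # Fallback model trained on structured EHRs only.
            risk.loc[~has_notes] = structured_model.predict_proba(patients.loc[~has_notes])[:, 1]
        return risk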
Background: Against a long-term trend of increasing demand, the COVID-19 pandemic has led to a global rise in common mental disorders. Now more than ever, there is an urgent need for scalable, evidence-based interventions to support mental well-being.
Objective: The aim of this proof-of-principle study was to evaluate the efficacy of a mobile-based app in adults with self-reported symptoms of anxiety and stress in a randomized controlled trial conducted during the first wave of the COVID-19 pandemic in the United Kingdom.
This tutorial paper focuses on variants of the bottleneck problem, taking an information-theoretic perspective, and discusses practical methods to solve it as well as its connections to coding and learning aspects. The intimate connections of this setting to remote source coding under the logarithmic loss distortion measure, information combining, common reconstruction, the Wyner-Ahlswede-Korner problem, the efficiency of investment information, as well as generalization, variational inference, representation learning, autoencoders, and others are highlighted. We discuss its extension to the distributed information bottleneck problem, with emphasis on the Gaussian model, and highlight the basic connections to uplink Cloud Radio Access Networks (CRAN) with oblivious processing.
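As a concrete reference point for the trade-off this tutorial studies, the centralized information bottleneck is usually posed as a Lagrangian over stochastic encodings; the notation below is the standard IB convention rather than anything fixed by this abstract:

    \min_{p(u \mid x)} \; I(X; U) - \beta\, I(Y; U)
    \quad \text{subject to the Markov chain } Y - X - U,

where \beta \ge 0 sets the trade-off between compressing X into the representation U and preserving the information U carries about Y.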
The problem of distributed representation learning is one in which multiple sources of information X_1, …, X_K are processed separately so as to learn as much information as possible about some ground truth Y. We investigate this problem from information-theoretic grounds, through a generalization of Tishby's centralized Information Bottleneck (IB) method to the distributed setting. Specifically, K encoders, K ≥ 2, compress their observations X_1, …, X_K separately in a manner such that, collectively, the produced representations preserve as much information as possible about Y.
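To make the distributed objective concrete, a Lagrangian analogue of the centralized formulation above is sketched below; it is written to match the abstract's description (K encoders, each producing U_k from X_k alone, collectively informative about Y) and is not necessarily the exact functional optimized in the paper:

    \max_{\{p(u_k \mid x_k)\}_{k=1}^{K}} \; I\bigl(Y; U_1, \dots, U_K\bigr) - \sum_{k=1}^{K} \beta_k\, I\bigl(X_k; U_k\bigr)
    \quad \text{with } U_k - X_k - (Y, X_{\setminus k}) \text{ a Markov chain for each } k,

so that each representation U_k is computed from its local observation only, while the trade-off parameters \beta_k \ge 0 control how aggressively each source is compressed.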