Background: While clinicians commonly learn heuristics to guide antidepressant treatment selection, surveys suggest real-world prescribing practices vary widely. We aimed to determine the extent to which antidepressant prescriptions were consistent with commonly advocated heuristics for treatment selection.
Methods: This retrospective longitudinal cohort study examined electronic health records from psychiatry and non-psychiatry practice networks affiliated with two large academic medical centers between March 2008 and December 2017.
Background: With the emergence of evidence-based treatments for treatment-resistant depression, strategies to identify individuals at greater risk for treatment resistance early in the course of illness could have clinical utility. We sought to develop and validate a model to predict treatment resistance in major depressive disorder using coded clinical data from the electronic health record.
Methods: We identified individuals from a large health system with a diagnosis of major depressive disorder receiving an index antidepressant prescription, and used a tree-based machine learning classifier to build a risk stratification model to identify those likely to experience treatment resistance.
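The abstract names only "a tree-based machine learning classifier" without further detail; the following is a minimal sketch of a risk stratification workflow under the assumption of a scikit-learn gradient-boosted tree ensemble, with the cohort file, feature columns, and outcome label all hypothetical.

```python
# Minimal sketch of a tree-based risk stratification model, assuming
# scikit-learn and a pre-built feature matrix of coded EHR variables.
# File name, column names, and outcome label are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical cohort: one row per patient at the index antidepressant
# prescription; "treatment_resistant" is the outcome label.
cohort = pd.read_csv("mdd_cohort_features.csv")
X = cohort.drop(columns=["patient_id", "treatment_resistant"])
y = cohort["treatment_resistant"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Gradient-boosted trees as one example of a tree-based classifier.
clf = GradientBoostingClassifier(random_state=0)
clf.fit(X_train, y_train)

# Predicted probabilities can be thresholded or binned to stratify
# patients by risk of treatment resistance.
risk = clf.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, risk))
```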
AI agents support high-stakes decision-making processes, from driving cars to prescribing drugs, making it increasingly important for human users to understand their behavior. Policy summarization methods aim to convey the strengths and weaknesses of such agents by demonstrating their behavior in a subset of informative states. Some policy summarization methods extract a summary that optimizes the ability to reconstruct the agent's policy under the assumption that users will deploy inverse reinforcement learning.
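The abstract does not describe the extraction procedure itself; the sketch below illustrates the general reconstruction-based idea on toy data, with a logistic-regression imitator standing in for the IRL-based reconstruction and all states, features, and the agent policy invented for illustration.

```python
# Minimal sketch of reconstruction-based policy summarization: greedily
# pick summary states so that a learner trained on the summary best
# reproduces the agent's policy. A logistic-regression imitator is a
# stand-in for the IRL step described in the abstract; data are toy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_states, n_features = 200, 5
features = rng.normal(size=(n_states, n_features))   # state features
true_w = rng.normal(size=n_features)
agent_actions = (features @ true_w > 0).astype(int)  # agent policy, 2 actions

def reconstruction_accuracy(summary_idx):
    """Train the stand-in learner on the summary and measure how often
    it reproduces the agent's action across all states."""
    if len(set(agent_actions[summary_idx])) < 2:
        return 0.0  # the learner needs both actions represented
    model = LogisticRegression().fit(features[summary_idx], agent_actions[summary_idx])
    return (model.predict(features) == agent_actions).mean()

# Greedily grow the summary with the state that most improves reconstruction.
summary, budget = [], 10
for _ in range(budget):
    best_s = max(
        (s for s in range(n_states) if s not in summary),
        key=lambda s: reconstruction_accuracy(summary + [s]),
    )
    summary.append(best_s)

print("summary states:", summary)
print("reconstruction accuracy:", reconstruction_accuracy(summary))
```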
AI agents are being developed to help people with high-stakes decision-making processes, from driving cars to prescribing drugs. It is therefore becoming increasingly important to develop "explainable AI" methods that help people understand the behavior of such agents. Summaries of agent policies can help human users anticipate agent behavior and facilitate more effective collaboration.
Proc AAAI Conf Hum Comput Crowdsourc, October 2019
Recent years have seen a boom in interest in interpretable machine learning systems built on models that can be understood, at least to some degree, by domain experts. However, exactly what kinds of models are truly human-interpretable remains poorly understood. This work advances our understanding of precisely which factors make models interpretable in the context of decision sets, a specific class of logic-based models.
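For readers unfamiliar with the model class, the toy snippet below illustrates what a decision set is: an unordered collection of independent if-then rules. The rules, feature names, and labels are invented for illustration only.

```python
# Toy illustration of a decision set: an unordered collection of
# independent if-then rules, each of which can fire on its own.
# Rules and feature names here are hypothetical.
decision_set = [
    ({"fever": True, "cough": True}, "flu"),
    ({"sneezing": True, "itchy_eyes": True}, "allergy"),
    ({"fever": False, "cough": False}, "healthy"),
]

def predict(example, rules, default="unknown"):
    """Return the label of a rule whose conditions all hold. In a true
    decision set rules are unordered; overlapping rules are typically
    penalized during learning rather than resolved at prediction time."""
    for conditions, label in rules:
        if all(example.get(k) == v for k, v in conditions.items()):
            return label
    return default

print(predict({"fever": True, "cough": True, "sneezing": False}, decision_set))
```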
We often desire our models to be interpretable as well as accurate. Prior work on optimizing models for interpretability has relied on easy-to-quantify proxies, such as sparsity or the number of operations required. In this work, we optimize for interpretability by including humans in the optimization loop.
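The abstract does not specify the optimization procedure; the sketch below shows one hedged reading of human-in-the-loop interpretability optimization, where a simulated user-study stub stands in for real human feedback, and the candidate model space (decision-tree depth) and trade-off weight are assumptions.

```python
# Minimal sketch of human-in-the-loop interpretability optimization:
# candidate models are scored by predictive accuracy plus feedback from
# a (here simulated) user study, and the search keeps the best trade-off.
# The candidate space, user-study stub, and weighting are assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

def simulated_user_study(model):
    """Stand-in for a real user study: assume deeper trees take longer
    for people to simulate, so larger depth -> lower interpretability."""
    return 1.0 / (1.0 + model.get_depth())

best, best_score = None, -np.inf
for depth in range(1, 8):  # candidate model space: tree depth
    model = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X, y)
    accuracy = cross_val_score(model, X, y, cv=5).mean()
    interpretability = simulated_user_study(model)
    score = accuracy + 0.5 * interpretability  # assumed trade-off weight
    if score > best_score:
        best, best_score = (depth, model), score

print("selected depth:", best[0], "score:", round(best_score, 3))
```

In the actual human-in-the-loop setting, the stub would be replaced by responses collected from study participants (for example, how quickly and accurately they can simulate a candidate model's predictions).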