This paper motivates institutional epistemic trust as an important ethical consideration informing the responsible development and implementation of artificial intelligence (AI) technologies in healthcare (or AI-inclusivity). Drawing on recent literature on epistemic trust and public trust in science, we start by examining the conditions under which we can have institutional epistemic trust in AI-inclusive healthcare systems and their members as providers of medical information and advice. In particular, we argue that institutional epistemic trust in AI-inclusive healthcare depends, in part, on the reliability of AI-inclusive medical practices and programs; on how well they are known and understood by the different stakeholders involved; on the epistemic and communicative duties and burdens they place on medical professionals; and, finally, on how they interact and align with the public's ethical values and interests, as well as with the background sociopolitical conditions within which AI-inclusive healthcare systems are embedded.
This workshop summary on natural language processing (NLP) markers for psychosis and other psychiatric disorders presents some of the clinical and research issues that NLP markers might address and some of the activities needed to move in that direction. We propose that the optimal development of NLP markers would occur in the context of research efforts to map out the underlying mechanisms of psychosis and other disorders. In this workshop, we identified some of the challenges to be addressed in developing and implementing NLP marker-based Clinical Decision Support Systems (CDSSs) in psychiatric practice, especially with respect to psychosis.