The success of large language models (LLMs) in general domains has sparked a wave of research into their applications in the medical field. However, enhancing the medical professionalism of these models remains a major challenge. This study proposed a novel theoretical framework for model training, the M-KAT framework, which integrates domain-specific training methods for LLMs with the unique characteristics of the medical discipline. The framework aims to improve the medical professionalism of models from three perspectives: general knowledge acquisition, specialized skill development, and alignment with clinical thinking. This study summarized the outcomes of medical LLMs across four tasks: clinical diagnosis and treatment, medical question answering, medical research, and health management. Using the M-KAT framework, we analyzed how the different training stages contribute to the enhancement of model professionalism. At the same time, some of the potential risks associated with medical LLMs can be addressed through targeted solutions at the pre-training, supervised fine-tuning (SFT), and model alignment stages, building on the professional capabilities cultivated in each. Additionally, this study identified the main directions for future research on medical LLMs: advancing professional evaluation datasets and metrics tailored to the needs of medical tasks, conducting in-depth studies on medical multimodal large language models (MLLMs) capable of integrating diverse data types, and exploring medical agents and multi-agent frameworks that can interact with real healthcare environments and support clinical decision-making. It is hoped that the predictions of this work can provide a reference for subsequent research.
DOI: 10.1007/s10916-024-02132-5