This article describes the development and validation of a virtual assistant for vaccine pharmacovigilance. We performed a pilot study with a panel of 22 healthcare professionals, who carried out content validation of the virtual assistant prototype; usability was then tested with 126 users using the System Usability Scale. Data were analyzed using the agreement rate and the content validity index, and the κ test was used to verify agreement among the evaluators. The content domains of the virtual assistant met the suitability, relevance, and representativeness criteria at excellent levels, all greater than 86%; the content validity index ranged from 0.81 to 0.98, with an average of 0.90 and an interrater reliability index of 1.00. Interrater agreement was excellent (average κ value, 0.76). The total usability score was 80.1, ranging from 78.2 in group 1 (users without reactions to vaccines) to 82.1 in group 2 (users with reactions) (P = .002). The virtual assistant for vaccine pharmacovigilance achieved satisfactory content validity and usability, supporting the claim that it provides greater surveillance and safety for patients.
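The usability figures above come from the System Usability Scale, whose scoring is standardized: each respondent rates 10 Likert items (1–5), odd-numbered items are positively worded and contribute (rating − 1), even-numbered items are negatively worded and contribute (5 − rating), and the 0–40 raw sum is multiplied by 2.5 to give a 0–100 score. A minimal sketch of that standard scoring (the function name and input format are illustrative, not from the study):

```python
def sus_score(responses):
    """Compute the System Usability Scale score for one respondent.

    `responses` is a list of 10 Likert ratings (1-5) in standard SUS
    item order: odd-numbered items are positively worded, even-numbered
    items negatively worded.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("expected 10 ratings between 1 and 5")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Positive items score (rating - 1); negative items score (5 - rating)
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # rescale the 0-40 raw sum to 0-100
```

Per-group means such as the 78.2 and 82.1 reported here would then be averages of these per-respondent scores.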
DOI: http://dx.doi.org/10.1097/CIN.0000000000000978