Using the wisdom of the crowds to find critical errors in biomedical ontologies: a study of SNOMED CT.

J Am Med Inform Assoc

Stanford Center for Biomedical Informatics Research, Stanford University, Stanford, California, USA; Biomedical Informatics Training Program, Stanford University, Stanford, California, USA.

Published: May 2015

Objectives: The verification of biomedical ontologies is an arduous process that typically involves peer review by subject-matter experts. This work evaluated the ability of crowdsourcing methods to detect errors in SNOMED CT (Systematized Nomenclature of Medicine Clinical Terms) and to address the challenges of scalable ontology verification.

Methods: We developed a methodology to crowdsource ontology verification that uses micro-tasking combined with a Bayesian classifier. We then conducted a prospective study in which both the crowd and domain experts verified a subset of SNOMED CT comprising 200 taxonomic relationships.
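The abstract does not specify the Bayesian classifier used to combine micro-task responses, but a minimal sketch of one common approach is a naive-Bayes aggregation of worker votes, weighted by assumed per-worker accuracy. The function name, accuracy values, and prior below are illustrative assumptions, not the paper's model:

```python
# Sketch: naive-Bayes aggregation of crowd votes on a taxonomic
# relationship. Worker accuracies and the prior are assumed values
# for illustration, not parameters from the study.
from math import log, exp

def bayes_verify(votes, accuracies, prior=0.5):
    """Return P(relationship is correct | votes).

    votes[i] is True if worker i judged the relationship correct;
    accuracies[i] is the assumed probability that worker i is right.
    """
    log_odds = log(prior) - log(1 - prior)
    for vote, acc in zip(votes, accuracies):
        if vote:
            log_odds += log(acc) - log(1 - acc)
        else:
            log_odds += log(1 - acc) - log(acc)
    return 1 / (1 + exp(-log_odds))

# Three workers, two of whom accept the relationship under review:
p = bayes_verify([True, True, False], [0.8, 0.7, 0.6], prior=0.5)
```

Under these assumed accuracies, two accepting votes outweigh one rejection, so the posterior probability that the relationship is correct exceeds the 0.5 prior.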

Results: The crowd identified errors as well as any single expert at about one-quarter of the cost. The inter-rater agreement (κ) between the crowd and the experts was 0.58; the inter-rater agreement between experts themselves was 0.59, suggesting that the crowd is nearly indistinguishable from any one expert. Furthermore, the crowd identified 39 previously undiscovered, critical errors in SNOMED CT (eg, 'septic shock is a soft-tissue infection').
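The reported agreement statistics are Cohen's kappa, which corrects observed agreement for agreement expected by chance. As a sketch, kappa for two raters' binary verdicts can be computed as follows (the verdict vectors are illustrative, not the study's data):

```python
# Sketch: Cohen's kappa for two raters giving binary verdicts
# (1 = relationship correct, 0 = erroneous) on the same items.
# The example labels are illustrative, not data from the study.
def cohens_kappa(r1, r2):
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement from each rater's marginal rate of 1-verdicts
    p1 = sum(r1) / n
    p2 = sum(r2) / n
    expected = p1 * p2 + (1 - p1) * (1 - p2)
    return (observed - expected) / (1 - expected)

crowd  = [1, 1, 0, 1, 0, 0, 1, 1]
expert = [1, 0, 0, 1, 0, 1, 1, 1]
kappa = cohens_kappa(crowd, expert)
```

Values in the 0.4-0.6 range, like the 0.58 and 0.59 reported here, are conventionally read as moderate agreement, which is why crowd-expert agreement being on par with expert-expert agreement is the key comparison.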

Discussion: The results show that the crowd can indeed identify errors in SNOMED CT that experts also find, and the results suggest that our method will likely perform well on similar ontologies. The crowd may be particularly useful in situations where an expert is unavailable, budget is limited, or an ontology is too large for manual error checking. Finally, our results suggest that the online anonymous crowd could successfully complete other domain-specific tasks.

Conclusions: We have demonstrated that the crowd can address the challenges of scalable ontology verification, completing not only intuitive, common-sense tasks, but also expert-level, knowledge-intensive tasks.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5566196
DOI: http://dx.doi.org/10.1136/amiajnl-2014-002901
