Background: The lack of machine-interpretable representations of consent permissions precludes the development of tools that act on permissions across information ecosystems at scale.

Objectives: To report the process, results, and lessons learned while annotating permissions in clinical consent forms.

Methods: We conducted a retrospective analysis of clinical consent forms. We developed an annotation scheme following the MAMA (Model-Annotate-Model-Annotate) cycle and evaluated interannotator agreement (IAA) using observed agreement (Ao), weighted kappa (κ), and Krippendorff's alpha (α).
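
For readers who want to compute the same agreement metrics on their own annotations, the sketch below is a minimal illustration, not the authors' code: the annotator labels are invented, the pairwise-averaging convention for observed agreement and kappa is an assumption, and it relies on numpy, scikit-learn, and the third-party krippendorff package.

```python
# Minimal sketch: observed agreement, weighted kappa, and Krippendorff's alpha
# for three hypothetical annotators labelling sentences as permission-sentences.
from itertools import combinations

import numpy as np
import krippendorff
from sklearn.metrics import cohen_kappa_score

# Toy labels: 1 = permission-sentence, 0 = not a permission-sentence.
# Rows are annotators, columns are sentences (invented data for illustration).
labels = np.array([
    [1, 0, 0, 1, 0, 0, 1, 0],   # annotator A
    [1, 0, 0, 1, 0, 1, 1, 0],   # annotator B
    [1, 0, 0, 0, 0, 0, 1, 0],   # annotator C
])

pairs = list(combinations(range(labels.shape[0]), 2))

# Average pairwise observed agreement: proportion of sentences on which a pair
# of annotators assigned the same label, averaged over all annotator pairs.
observed = np.mean([np.mean(labels[i] == labels[j]) for i, j in pairs])

# Average pairwise weighted kappa. With binary labels the linear weighting has
# no effect, but it matters when the scheme has ordered categories.
kappa = np.mean([
    cohen_kappa_score(labels[i], labels[j], weights="linear") for i, j in pairs
])

# Krippendorff's alpha for nominal data; the library takes a (coders x units)
# matrix and also tolerates missing ratings encoded as np.nan.
alpha = krippendorff.alpha(reliability_data=labels,
                           level_of_measurement="nominal")

print(f"observed agreement   = {observed:.3f}")
print(f"mean weighted kappa  = {kappa:.3f}")
print(f"Krippendorff's alpha = {alpha:.3f}")
```

Note that this treats observed agreement and kappa as averages over annotator pairs, which is one common convention for three or more annotators; the paper's exact computation may differ.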

Results: The final dataset included 6,399 sentences from 134 clinical consent forms. Complete agreement was achieved for 5,871 sentences, including 211 positively identified and 5,660 negatively identified as permission-sentences across all three annotators (Ao = 0.944, Krippendorff's α = 0.599). These values reflect moderate to substantial IAA. Although permission-sentences share a common set of words and structures, disagreements between annotators were largely explained by lexical variability and ambiguity in sentence meaning.

Conclusion: Our findings point to the complexity of identifying permission-sentences within clinical consent forms. We present our results in light of lessons learned, which may serve as a launching point for developing tools for automated permission extraction.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8221844
DOI: http://dx.doi.org/10.1055/s-0041-1730032
