Inter-Coder Agreement in One-to-Many Classification: Fuzzy Kappa.

PLoS One

The Department of Tourism, Recreation and Sport Management, University of Florida, P.O. Box 118208, Gainesville, FL, 32611-8208, United States of America.

Published: July 2016

Article Abstract

Content analysis involves the classification of textual, visual, or audio data. Inter-coder agreement is estimated by having two or more coders classify the same data units and then comparing their results. Existing methods of agreement estimation, e.g., Cohen's kappa, require that coders place each unit of content into one and only one category (one-to-one coding) from a pre-established set of categories. However, in certain data domains (e.g., maps, photographs, databases of texts and images), this requirement seems overly restrictive. The restriction could be lifted, provided that there is a measure for calculating inter-coder agreement under a one-to-many protocol. Building on existing approaches to one-to-many coding in geography and biomedicine, such a measure, fuzzy kappa, an extension of Cohen's kappa, is proposed. It is argued that the measure is especially well suited to data from domains in which the holistic reasoning of human coders is utilized to describe the data and access the meaning of communication.
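For orientation, the Python sketch below illustrates the quantities the abstract refers to: standard Cohen's kappa for one-to-one coding, and a simple per-unit overlap score for one-to-many coding. The overlap score (a Jaccard-style ratio) and all names in the sketch are illustrative assumptions only; it is not the paper's fuzzy kappa, which, like Cohen's kappa, also corrects for chance agreement.

    # Illustrative sketch only. Cohen's kappa for one-to-one coding, plus an
    # assumed Jaccard-style per-unit overlap for one-to-many coding; the
    # paper's fuzzy kappa is defined differently and corrects for chance.
    from collections import Counter

    def cohens_kappa(coder_a, coder_b):
        """One-to-one coding: each coder assigns exactly one category per unit."""
        n = len(coder_a)
        observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
        freq_a, freq_b = Counter(coder_a), Counter(coder_b)
        # Chance agreement: probability both coders pick the same category at random.
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)

    def overlap_agreement(sets_a, sets_b):
        """One-to-many coding: mean Jaccard overlap per unit (assumption, not the paper's formula)."""
        scores = [len(a & b) / len(a | b) for a, b in zip(sets_a, sets_b)]
        return sum(scores) / len(scores)

    # Hypothetical usage with made-up category labels
    a = ["nature", "people", "nature", "city"]
    b = ["nature", "city", "nature", "city"]
    print(cohens_kappa(a, b))          # observed agreement corrected for chance

    sets_a = [{"nature", "water"}, {"people"}, {"nature"}]
    sets_b = [{"nature"}, {"people", "city"}, {"nature", "water"}]
    print(overlap_agreement(sets_a, sets_b))  # raw overlap, no chance correction

The gap between the raw overlap score and a chance-corrected statistic is exactly what motivates a kappa-type measure for the one-to-many protocol.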

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4775035
PLOS: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0149787

Publication Analysis

Top Keywords (count)
inter-coder agreement: 12
agreement one-to-many: 8
fuzzy kappa: 8
cohen's kappa: 8
data domains: 8
data: 5
one-to-many classification: 4
classification fuzzy: 4
kappa: 4
kappa content: 4

