Departmental Papers (ASC)

Document Type

Journal Article

Date of this Version

2-10-2008

Publication Source

Communication Methods and Measures

Volume

2

Issue

4

Start Page

323

Last Page

338

DOI

10.1080/19312450802467134

Abstract

Reliability is an important bottleneck for content analysis and similar methods for generating analyzable data. This is because the analysis of complex qualitative phenomena such as texts, social interactions, and media images easily escapes physical measurement and calls for human coders to describe what they read or observe. Owing to the individuality of coders, the data they generate for subsequent analysis are prone to errors not typically found in mechanical measuring devices. However, most measures that are designed to indicate whether data are sufficiently reliable to warrant analysis do not differentiate among kinds of disagreement that prevent data from being reliable. This paper distinguishes two kinds of disagreement, systematic disagreement and random disagreement, and suggests measures of them in conjunction with the agreement coefficient α (alpha) (Krippendorff, 2004a, pp. 211-256). These measures, previously proposed for interval data (Krippendorff, 1970), are here developed for nominal data. Their importance lies in their ability not only to aid the development of reliable coding instructions but also to warn researchers about two kinds of errors they face when using imperfect data.
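The coefficient α referenced in the abstract is Krippendorff's alpha, defined as α = 1 - D_o/D_e, where D_o is the observed disagreement among coders and D_e is the disagreement expected by chance. As a minimal sketch of the nominal-data case only, the Python code below computes α from a coincidence matrix; the function name nominal_alpha and the input layout (one list of coder-assigned values per unit of analysis) are illustrative assumptions, not taken from the paper, and the sketch omits the systematic/random decomposition the paper develops.

    from collections import Counter
    from itertools import permutations

    def nominal_alpha(units):
        """Krippendorff's alpha for nominal data (a sketch).

        units: one inner list per unit of analysis, holding the values
        the coders assigned to it (missing values simply omitted).
        Units with fewer than two values are unpairable and skipped.
        """
        coincidences = Counter()          # o_ck: coincidence matrix
        for values in units:
            m = len(values)
            if m < 2:
                continue                  # nothing to pair within this unit
            for c, k in permutations(values, 2):
                coincidences[(c, k)] += 1 / (m - 1)

        n_c = Counter()                   # marginal totals n_c per category
        for (c, _), count in coincidences.items():
            n_c[c] += count
        n = sum(n_c.values())             # total number of pairable values

        # Observed and expected disagreement for the nominal metric
        # (delta = 1 whenever the two paired values differ).
        d_o = sum(count for (c, k), count in coincidences.items() if c != k) / n
        d_e = sum(n_c[c] * n_c[k] for c, k in permutations(n_c, 2)) / (n * (n - 1))
        return 1 - d_o / d_e

    # Example: 4 units, each coded by 2 coders
    print(nominal_alpha([["a", "a"], ["a", "b"], ["b", "b"], ["b", "b"]]))
    # -> 0.533..., i.e. agreement well above chance but short of reliable

Pairing values within units rather than across coders is what lets the coincidence-matrix form handle missing data and any number of coders; the example above evaluates to α = 1 - 7·2/30 ≈ 0.533.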

Copyright/Permission Statement

This is an electronic version of an article published in Communication Methods and Measures. Communication Methods and Measures is available online at: http://dx.doi.org/10.1080/19312450802467134

Included in

Communication Commons

Date Posted: 06 October 2010

This document has been peer reviewed.