Communication Methods and Measures
Coefficients that assess the reliability of data-making processes – coding text, transcribing interviews, or categorizing observations into analyzable terms – are mostly conceptualized in terms of the agreement that a set of coders, observers, judges, or measuring instruments exhibits. When variation is low, reliability coefficients reveal their dependency on an often-neglected phenomenon: the amount of information that reliability data provide about the reliability of the coding process or the data it generates. This paper explores the concept of reliability, simple agreement, four conceptions of chance for correcting that agreement, and sources of information deficiency, and it develops two measures of information about reliability, akin to the power of a statistical test, intended as companions to traditional reliability coefficients, especially Krippendorff's (2004, pp. 221-250; Hayes & Krippendorff, 2007) alpha.
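To make the chance-corrected agreement the abstract refers to concrete, the following is a minimal sketch of Krippendorff's alpha for nominal data, computed from a coincidence matrix as 1 minus the ratio of observed to expected disagreement. The function name and the data layout (one list of assigned values per unit, missing codes simply omitted) are illustrative assumptions, not the author's own implementation.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Sketch of Krippendorff's alpha for nominal data.

    units: list of lists; each inner list holds the values that the
    coders assigned to one unit (values from coders who skipped the
    unit are simply omitted).
    """
    # Build the coincidence matrix: every ordered pair of values within
    # a unit coded by m >= 2 coders contributes 1/(m - 1).
    coincidences = Counter()
    for values in units:
        m = len(values)
        if m < 2:
            continue  # a single value cannot be paired; it carries no reliability information
        for a, b in permutations(values, 2):
            coincidences[(a, b)] += 1.0 / (m - 1)

    n = sum(coincidences.values())  # total number of pairable values
    totals = Counter()              # marginal totals per category
    for (a, _), w in coincidences.items():
        totals[a] += w

    # Observed disagreement: off-diagonal mass of the coincidence matrix.
    Do = sum(w for (a, b), w in coincidences.items() if a != b) / n
    # Expected disagreement under chance pairing of all values.
    De = sum(totals[a] * totals[b]
             for a in totals for b in totals if a != b) / (n * (n - 1))
    return 1.0 - Do / De
```

With perfect agreement on two units, `krippendorff_alpha_nominal([[1, 1], [2, 2]])` returns 1.0; when half the pairings disagree exactly as chance would predict, alpha drops to 0. Note that this sketch measures agreement only; the paper's point is that such a coefficient says little when category variation is low, which is what its proposed information measures address.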
This is an Author's Accepted Manuscript of an article published in Communication Methods and Measures, 2011, © Taylor & Francis, available online at: http://www.tandfonline.com/10.1080/19312458.2011.568376.
Krippendorff, K. (2011). Agreement and Information in the Reliability of Coding. Communication Methods and Measures, 5 (2), 93-112. https://doi.org/10.1080/19312458.2011.568376
Date Posted: 19 July 2011
This document has been peer reviewed.