
Chance-adjusted index


Overview

Chance-adjusted indices estimate the amount of agreement between raters that could be expected to occur by chance (i.e., through random guessing). Each index estimates chance agreement in its own way and under its own assumptions, which can lead to paradoxical results when those assumptions are violated. All chance-adjusted indices calculate reliability as the ratio of observed non-chance agreement to possible non-chance agreement.

$$\text{Reliability} = \frac{p_o - p_c}{1 - p_c}$$

$p_o$ is percent observed agreement

$p_c$ is percent chance agreement
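
To make the ratio concrete, here is a minimal Python sketch of the formula above (not part of the original page); the function name and example values are hypothetical.

```python
# Minimal sketch of the chance-adjusted reliability ratio above.
# The function name and example values are hypothetical.

def chance_adjusted_reliability(p_o: float, p_c: float) -> float:
    """Return (p_o - p_c) / (1 - p_c): observed non-chance agreement
    as a proportion of possible non-chance agreement."""
    if p_c >= 1.0:
        raise ValueError("p_c must be below 1 for the ratio to be defined")
    return (p_o - p_c) / (1.0 - p_c)

# Example: 80% observed agreement with 50% expected by chance.
print(chance_adjusted_reliability(0.80, 0.50))  # 0.6
```

Note that the ratio is negative when observed agreement falls below chance agreement and equals 1 only when observed agreement is perfect.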

Chance agreement in simplified formulas

Numerous methods for estimating chance agreement with two raters and dichotomous categories have been proposed; in general, they adopt one of three main approaches. The category-based approach is adopted by Bennett et al.'s S score; the individual-distribution-based approach is adopted by Cohen's kappa coefficient; and the average-distribution-based approach is adopted by Scott's pi coefficient, Gwet's gamma coefficient, and Krippendorff's alpha coefficient.


$$p_c^S = \frac{1}{q}$$

$$p_c^\kappa = \left(\frac{n_{1+}}{n}\right) \left(\frac{n_{+1}}{n}\right) + \left(\frac{n_{2+}}{n}\right) \left(\frac{n_{+2}}{n}\right)$$

$$m_1 = \frac{n_{+1} + n_{1+}}{2}$$

$$m_2 = \frac{n_{+2} + n_{2+}}{2}$$

$$p_c^\pi = \left(\frac{m_1}{n}\right) \left(\frac{m_1}{n}\right) + \left(\frac{m_2}{n}\right) \left(\frac{m_2}{n}\right)$$

$$p_c^\gamma = \left(\frac{m_1}{n}\right) \left(\frac{m_2}{n}\right) + \left(\frac{m_1}{n}\right) \left(\frac{m_2}{n}\right)$$

$$p_c^\alpha = \left(\frac{2m_1}{2n}\right) \left(\frac{2m_1 - 1}{2n - 1}\right) + \left(\frac{2m_2}{2n}\right) \left(\frac{2m_2 - 1}{2n - 1}\right)$$


$q$ is the total number of categories

$n_{1+}$ is the number of items rater $r_1$ assigned to category $k_1$

$n_{2+}$ is the number of items rater $r_1$ assigned to category $k_2$

$n_{+1}$ is the number of items rater $r_2$ assigned to category $k_1$

$n_{+2}$ is the number of items rater $r_2$ assigned to category $k_2$

$n$ is the total number of items

These counts can be arranged in the following contingency table, where each cell $n_{ij}$ counts the items that rater $r_1$ assigned to category $k_i$ and rater $r_2$ assigned to category $k_j$:

|              | $r_2$: $k_1$ | $r_2$: $k_2$ | Total    |
| ------------ | ------------ | ------------ | -------- |
| $r_1$: $k_1$ | $n_{11}$     | $n_{12}$     | $n_{1+}$ |
| $r_1$: $k_2$ | $n_{21}$     | $n_{22}$     | $n_{2+}$ |
| Total        | $n_{+1}$     | $n_{+2}$     | $n$      |
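
To make the simplified formulas concrete, here is a Python sketch (not part of the original page) that computes each index's percent chance agreement from the cell counts in the table above; all function and variable names, as well as the example counts, are hypothetical.

```python
# Sketch of the simplified two-rater, dichotomous-category formulas
# above, translated directly into code. Cell counts follow the
# contingency table above (rows = rater r1, columns = rater r2);
# all names and example counts are hypothetical.

def chance_agreement(n11: int, n12: int, n21: int, n22: int) -> dict:
    """Percent chance agreement under each index's model."""
    n = n11 + n12 + n21 + n22           # total number of items
    n1p, n2p = n11 + n12, n21 + n22     # rater 1 margins
    np1, np2 = n11 + n21, n12 + n22     # rater 2 margins
    m1 = (np1 + n1p) / 2                # mean margin, category k1
    m2 = (np2 + n2p) / 2                # mean margin, category k2
    return {
        "S":     1 / 2,                 # 1/q with q = 2 categories
        "kappa": (n1p / n) * (np1 / n) + (n2p / n) * (np2 / n),
        "pi":    (m1 / n) ** 2 + (m2 / n) ** 2,
        "gamma": 2 * (m1 / n) * (m2 / n),
        "alpha": ((2 * m1) / (2 * n)) * ((2 * m1 - 1) / (2 * n - 1))
                 + ((2 * m2) / (2 * n)) * ((2 * m2 - 1) / (2 * n - 1)),
    }

# Hypothetical counts: raters agree on 85 items, disagree on 15.
n11, n12, n21, n22 = 45, 5, 10, 40
p_o = (n11 + n22) / (n11 + n12 + n21 + n22)  # percent observed agreement
for index, p_c in chance_agreement(n11, n12, n21, n22).items():
    reliability = (p_o - p_c) / (1 - p_c)
    print(f"{index}: p_c = {p_c:.4f}, reliability = {reliability:.4f}")
```

With these example counts, $p_o = 0.85$ and all five estimates of $p_c$ fall near 0.50, so every index yields a reliability near 0.70; the indices diverge more noticeably when the category distributions are skewed.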

Chance agreement in generalized formulas

Coming soon...

References

  1. Zhao, X., Liu, J. S., & Deng, K. (2012). Assumptions behind inter-coder reliability indices. In C. T. Salmon (Ed.), *Communication yearbook* (pp. 418–480). Routledge.
  2. Gwet, K. L. (2014). *Handbook of inter-rater reliability: The definitive guide to measuring the extent of agreement among raters* (4th ed.). Advanced Analytics.