Krippendorff's alpha coefficient
The alpha coefficient is a chance-adjusted index of the reliability of categorical measurements. It estimates chance agreement using an average-distribution-based approach: like Scott's pi coefficient, it assumes that observers share a conspired "quota" for each category that they work together to meet. Unlike pi, however, it also corrects for sample size, so it yields a slightly higher reliability score than pi when the reliability experiment includes few items; the two coefficients converge as the number of items grows.
- `mALPHAK`: Calculates alpha using vectorized formulas
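`mALPHAK` is a MATLAB function; as a rough illustration of the same computation, here is a Python sketch for the simplest case, identity (nominal) weights and complete data, using the average-distribution chance estimate and small-sample correction described above. The function name and variable names are ours, not the repository's.

```python
import numpy as np

def alpha_nominal(ratings):
    """Sketch of Krippendorff's alpha for nominal categories, complete data.

    ratings: (n_items, n_raters) array-like of category codes,
    where every item is rated by every rater.
    """
    ratings = np.asarray(ratings)
    n, r = ratings.shape
    cats = np.unique(ratings)
    # r_iq: number of raters who assigned item i to category q
    riq = np.stack([(ratings == q).sum(axis=1) for q in cats], axis=1)
    # Percent observed agreement across all rater pairs within items
    pa = (riq * (riq - 1)).sum() / (n * r * (r - 1))
    # Alpha's small-sample correction to observed agreement
    pa_adj = pa * (1 - 1 / (n * r)) + 1 / (n * r)
    # Chance agreement from the average category proportions
    pi_q = riq.sum(axis=0) / (n * r)
    pe = (pi_q ** 2).sum()
    return (pa_adj - pe) / (1 - pe)
```

With perfect agreement the function returns 1.0; with two raters it matches the classic coincidence-matrix formulation of alpha.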
Use these formulas with two raters and two (dichotomous) categories:
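The formulas on the original page were images and did not survive extraction; the following is a reconstruction in the notation of Gwet (2014), where $a$, $b$, $c$, and $d$ are the cell counts of the two-rater contingency table ($a$ = both raters chose category 1, $d$ = both chose category 2, $b$ and $c$ = disagreements) and $n = a + b + c + d$ is the number of items:

$$p_a = \frac{a + d}{n}, \qquad p_a' = \left(1 - \frac{1}{2n}\right) p_a + \frac{1}{2n}$$

$$\pi_1 = \frac{2a + b + c}{2n}, \qquad p_e = \pi_1^2 + (1 - \pi_1)^2$$

$$\alpha = \frac{p_a' - p_e}{1 - p_e}$$

Here $p_a'$ is the observed agreement after alpha's small-sample correction, and $\pi_1$ is the proportion of category-1 classifications averaged across the two raters.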
Use these formulas with multiple raters, multiple categories, and any weighting scheme:
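Again reconstructing the missing formulas in Gwet's (2014) notation, and assuming for simplicity that each of the $n$ items is rated by all $r$ raters: let $r_{iq}$ be the number of raters who assigned item $i$ to category $q$, and $w_{qk}$ the weight applied to the pair of categories $q$ and $k$:

$$p_a = \frac{1}{n}\sum_{i=1}^{n}\sum_{q=1}^{Q}\frac{r_{iq}\,(r_{iq}^{\star} - 1)}{r(r-1)}, \qquad r_{iq}^{\star} = \sum_{k=1}^{Q} w_{qk}\, r_{ik}$$

$$p_a' = \left(1 - \frac{1}{nr}\right) p_a + \frac{1}{nr}, \qquad \pi_q = \frac{1}{n}\sum_{i=1}^{n}\frac{r_{iq}}{r}$$

$$p_e = \sum_{q=1}^{Q}\sum_{k=1}^{Q} w_{qk}\,\pi_q\,\pi_k, \qquad \alpha = \frac{p_a' - p_e}{1 - p_e}$$

With identity weights ($w_{qq} = 1$ and $w_{qk} = 0$ otherwise), $r_{iq}^{\star} = r_{iq}$ and these expressions reduce to the nominal case above.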
- Krippendorff, K. (1970). Estimating the reliability, systematic error and random error of interval data. Educational and Psychological Measurement, 30(1), 61–70.
- Krippendorff, K. (1980). Content analysis: An introduction to its methodology. Newbury Park, CA: Sage Publications.
- Hayes, A. F., & Krippendorff, K. (2007). Answering the call for a standard reliability measure for coding data. Communication Methods and Measures, 1(1), 77–89.
- Gwet, K. L. (2014). Handbook of inter-rater reliability: The definitive guide to measuring the extent of agreement among raters (4th ed.). Gaithersburg, MD: Advanced Analytics.