# Scott's pi coefficient

Jeffrey M Girard edited this page Jan 3, 2017 · 35 revisions

#### Overview

The pi coefficient is a chance-adjusted index for the reliability of categorical measurements. As such, it is meant to equal the ratio of "observed nonchance agreement" to "possible nonchance agreement" and to answer the question: how often did the raters agree when they weren't guessing?

The pi coefficient estimates chance agreement using an average-distribution-based approach. Specifically, it assumes that raters engage in a chance-based process to determine whether to classify each item randomly or deliberately prior to inspecting it. It assumes that the likelihood of two raters randomly assigning an item to the same category is, for each category, the product of the raters' average proportions of assignments to that category (i.e., the square of the average proportion), summed across categories.
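This chance model can be illustrated with a small numeric sketch in Python (the average proportions below are hypothetical, chosen only for illustration):

```python
# Hypothetical average distribution: across raters, 60% of items are
# assigned to category 1 and 40% to category 2 on average.
avg_props = [0.6, 0.4]

# Under the average-distribution model, the chance that two raters randomly
# land on the same category is the sum of squared average proportions.
p_chance = sum(p ** 2 for p in avg_props)  # approximately 0.52
```

So under these (hypothetical) marginals, roughly half of all agreements would be expected by chance alone, which is what pi subtracts out.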

Zhao et al. (2012) described these assumptions using the following metaphor (with two categories for simplicity). All raters share a "quota" for how many items they must, on average, assign to each category. One rater places two sets of marbles into a shared urn, one set per category; each set has its own color and contains a number of marbles equal to its category's quota. For each item, each rater draws a marble at random from the urn, notes its color, and puts it back. If both raters drew the same color, then both raters classify that item randomly, assigning it to the category corresponding to the drawn color without inspecting the item at all. Only if the raters drew different colors do they classify the item deliberately, inspecting it and comparing its features to the established category membership criteria. Each rater keeps track of the number of items he or she has assigned to each category; whenever a rater reaches his or her quota for a category, he or she stops drawing from the urn and assigns all remaining items to the other category in order to meet its quota.

#### History

Scott (1955) proposed the pi coefficient to estimate the reliability of two raters assigning items to nominal categories. Fleiss (1971) extended the pi coefficient to accommodate multiple raters. Gwet (2014) then generalized the pi coefficient further to accommodate multiple raters, any weighting scheme, and missing data. The generalized formulas provided here, and instantiated in the provided function, correspond to Gwet's formulation (which he refers to as the generalized Fleiss' kappa coefficient). It is also worth noting that several other reliability indices are equivalent to Scott's pi coefficient, including Siegel & Castellan's (1988) revised kappa coefficient and Byrt, Bishop, and Carlin's (1993) bias-adjusted kappa coefficient.

#### MATLAB Functions

- `mSCOTTPI` %Calculates pi using vectorized formulas

#### Simplified Formulas

Use these formulas with two raters and two (dichotomous) categories:

$$\pi = \frac{p_o - p_c}{1 - p_c}$$

$$p_o = \frac{a + d}{n}$$

$$p_c = \left(\frac{f_1 + f_2}{2n}\right)^2 + \left(\frac{g_1 + g_2}{2n}\right)^2$$

where:

- $a$ is the number of items both raters assigned to category 1
- $d$ is the number of items both raters assigned to category 2
- $n$ is the total number of items
- $f_1$ is the number of items rater 1 assigned to category 1
- $g_1$ is the number of items rater 1 assigned to category 2
- $f_2$ is the number of items rater 2 assigned to category 1
- $g_2$ is the number of items rater 2 assigned to category 2
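The repository's own implementation is the MATLAB function `mSCOTTPI`; as a cross-check of the simplified formulas, here is a minimal Python sketch (the cell names `a`, `b`, `c`, `d` for the four cells of the 2×2 agreement table are my own labels, not from the original):

```python
def scotts_pi_2x2(a, b, c, d):
    """Scott's pi for two raters and two categories.

    a: both raters assigned the item to category 1
    d: both raters assigned the item to category 2
    b: rater 1 chose category 1, rater 2 chose category 2
    c: rater 1 chose category 2, rater 2 chose category 1
    """
    n = a + b + c + d
    p_obs = (a + d) / n                       # observed agreement
    # Chance agreement from the *average* marginal proportions of the
    # two raters (this is what distinguishes pi from Cohen's kappa).
    p1 = ((a + b) + (a + c)) / (2 * n)        # average proportion, category 1
    p2 = ((c + d) + (b + d)) / (2 * n)        # average proportion, category 2
    p_chance = p1 ** 2 + p2 ** 2
    return (p_obs - p_chance) / (1 - p_chance)
```

For example, a table with `a = 40`, `b = 10`, `c = 5`, `d = 45` gives observed agreement 0.85, chance agreement 0.50125, and pi of about 0.70.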

#### Generalized Formulas

Use these formulas with multiple raters, multiple categories, and any weighting scheme:

$$\pi = \frac{p_o - p_c}{1 - p_c}$$

$$p_o = \frac{1}{n'} \sum_{i=1}^{n'} \sum_{k=1}^{q} \frac{r_{ik} \left(r^{*}_{ik} - 1\right)}{r_i (r_i - 1)}, \qquad r^{*}_{ik} = \sum_{l=1}^{q} w_{kl} \, r_{il}$$

$$p_c = \sum_{k=1}^{q} \sum_{l=1}^{q} w_{kl} \, \bar{\pi}_k \bar{\pi}_l, \qquad \bar{\pi}_k = \frac{1}{n} \sum_{i=1}^{n} \frac{r_{ik}}{r_i}$$

where:

- $q$ is the total number of categories
- $w_{kl}$ is the weight associated with two raters assigning an item to categories $k$ and $l$
- $r_{ik}$ is the number of raters that assigned item $i$ to category $k$
- $n'$ is the number of items that were coded by two or more raters
- $r_{il}$ is the number of raters that assigned item $i$ to category $l$
- $r_i$ is the number of raters that assigned item $i$ to any category
- $n$ is the total number of items
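The generalized formulas can be sketched in Python with NumPy as follows (the function name and the items-by-categories matrix layout are my own choices for illustration; the repository's implementation is the MATLAB function `mSCOTTPI`):

```python
import numpy as np

def scotts_pi_general(r, w=None):
    """Sketch of generalized Scott's pi (Gwet's generalized Fleiss' kappa).

    r : (n_items, q) array where r[i, k] is the number of raters that
        assigned item i to category k
    w : (q, q) weight matrix; defaults to the identity (nominal categories)
    """
    r = np.asarray(r, dtype=float)
    n, q = r.shape
    if w is None:
        w = np.eye(q)
    r_i = r.sum(axis=1)                      # raters per item
    multi = r_i >= 2                         # the n' items coded by 2+ raters
    rm, rim = r[multi], r_i[multi]
    r_star = rm @ w.T                        # weighted rater counts r*_ik
    # Observed (weighted) agreement, averaged over the n' items
    p_obs = ((rm * (r_star - 1)).sum(axis=1) / (rim * (rim - 1))).mean()
    # Chance agreement from the average category proportions
    # (averaged here over rated items, assuming every item has 1+ raters)
    rated = r_i >= 1
    pi_k = (r[rated] / r_i[rated][:, None]).mean(axis=0)
    p_chance = pi_k @ w @ pi_k
    return (p_obs - p_chance) / (1 - p_chance)
```

With two raters, identity weights, and complete data, this reduces to the simplified two-rater formulas above.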

#### References

1. Scott, W. A. (1955). Reliability of content analysis: The case of nominal scaling. Public Opinion Quarterly, 19(3), 321–325.
2. Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5), 378–382.
3. Siegel, S., & Castellan, N. J. (1988). Nonparametric statistics for the behavioral sciences. New York, NY: McGraw-Hill.
4. Byrt, T., Bishop, J., & Carlin, J. B. (1993). Bias, prevalence and kappa. Journal of Clinical Epidemiology, 46, 423–429.
5. Gwet, K. L. (2014). Handbook of inter-rater reliability: The definitive guide to measuring the extent of agreement among raters (4th ed.). Gaithersburg, MD: Advanced Analytics.