Alternative participation approach: Assign randomly rather than rank #79

MaxGhenis opened this issue Dec 27, 2018 · 4 comments

MaxGhenis commented Dec 27, 2018

The SNAP README states:

Households are then ranked by the probability of participation, and, for each state's subgroup, their weights are summed until they reach the administrative level.

This approach would work well if the regression models have good fit, i.e. probabilities are either close to 0 or close to 1. But if probabilities are fuzzier, this approach could overstate the correlation between predictors and participation.

As a simple example, suppose P(SNAP) = max(1 - (income / 50000), 0). A ranking approach might assign participation to everyone with income below $25k, but this would overestimate participation among those with income around $24k, underestimate it among those around $26k, and in general overstate the correlation between income and participation.
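To make the distortion concrete, here's a quick check (a sketch using the toy probability function above; the $24k/$26k households are assumed for illustration):

import numpy as np

# Under P(SNAP) = max(1 - income / 50000, 0), households at $24k and $26k
# have nearly identical probabilities (0.52 vs. 0.48), yet ranking with a
# cutoff at $25k assigns the first with certainty and the second never.
income = np.array([24000, 26000])
prob = np.maximum(1 - income / 50000, 0)
prob
# array([0.52, 0.48])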

An alternative is to assign participation randomly depending on predicted probabilities. If the predicted probabilities align with the administrative total (sum(prob) = participation), you assign participation to each household whose probability exceeds a U(0,1) random draw. This would preserve the fuzziness and avoid excessive correlation.
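Here's a minimal sketch of that uniform-draw rule (assuming prob holds each household's predicted probability):

import numpy as np

prob = np.array([0.1, 0.5, 0.8])
# Each household participates if its probability beats an independent
# U(0,1) draw, so expected total participation equals sum(prob).
participates = prob > np.random.uniform(size=prob.size)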

This gets a bit more complicated when the administrative total doesn't equal the sum of probabilities, which it probably won't, but numpy.random.choice can be used here. For example, suppose there are three filers with predicted SNAP participation probabilities of 10%, 50%, and 80%, and the administrative target is two participants. This function will select two filers using the appropriate probability (notebook):

from numpy.random import choice

def weighted_sample(ids, prob_participate, participation_target):
    """Sample `participation_target` ids without replacement,
    weighted by the normalized participation probabilities."""
    return choice(ids, p=prob_participate / prob_participate.sum(),
                  size=participation_target, replace=False)

For example:

import numpy as np

ids = np.array([1, 2, 3])
prob_participate = np.array([0.1, 0.5, 0.8])
participation_target = 2

weighted_sample(ids, prob_participate, participation_target)
# array([2, 3])

array([2, 3]) is the most likely outcome, but filer 1 will also be selected some of the time since it has nonzero probability. Under the ranking system it might never get assigned participation.
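A quick simulation (an illustrative check, not part of the original notebook) confirms this, reusing weighted_sample from above:

import numpy as np

ids = np.array([1, 2, 3])
prob_participate = np.array([0.1, 0.5, 0.8])

# Tally how often each filer appears across many repeated samples.
draws = np.concatenate([weighted_sample(ids, prob_participate, 2)
                        for _ in range(10000)])
np.bincount(draws, minlength=4)[1:] / 10000
# Filer 1 shows up in a nonzero share of samples (roughly 20% here);
# under ranking, it never would.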

MaxGhenis commented

@Amy-Xu

Amy-Xu commented Jan 2, 2019

This is a valid concern. Dan has also brought up this issue before. Records with positive predictors are a lot more likely to be imputed, and not all of the predictors are available in the CPS dataset. With regard to income, lower-income people are much more likely to be imputed than higher-income people, and on top of that, they may be imputed at a higher rate than they 'should' be. Dan has asked Robert Moffitt whether participation should be imputed by income range according to the estimated probability. His view, in short, is that it depends on what you want to do.

So I have two questions. First, do you have a 'correct' correlation in mind for income and participation, or is the goal simply to preserve the fuzziness? Second, is the correlation particularly important to your project?

Anyway, I do think it's worthwhile to add more options for the imputation procedure. Your solution is sensible and easy to implement. The caveat, from my perspective, is how to factor in the record weights. CPS weights are population/family/household counts; if we simply fold them into the argument p, the weight of each record could confound the actual probability.
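One way to see that caveat (with illustrative numbers, not actual CPS values): folding weights into p lets a large weight swamp a small probability.

import numpy as np

prob = np.array([0.9, 0.1])
weight = np.array([10, 1000])
# Weighted probabilities, normalized for use as `p` in numpy.random.choice.
p = prob * weight / (prob * weight).sum()
p
# array([0.0826, 0.9174]) -- the low-probability record now dominates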

MaxGhenis commented

First, do you have a 'correct' correlation in mind for income and participation, or is the goal simply to preserve the fuzziness? Second, is the correlation particularly important to your project?

A simple example could be modeling the after-tax income distributional impact of repealing SNAP. The current data overstates SNAP participation among low-income households and understates it among higher-income households, so SNAP repeal would look more regressive than it actually is. Other reforms, like shifting the SNAP budget to a UBI, would also look overly regressive.

The caveat, from my perspective, is how to factor in the record weights.

Yes, that's trickier, and I think it would require looping budget logic rather than a single numpy.random.choice call. I asked for ideas in this SO question. I think it requires adding households one at a time according to the probabilities, until you can't add any more.

MaxGhenis commented Jan 3, 2019

Here's the code from my answer to the SO question:

from numpy import random

def weighted_budgeted_random_sample(df, budget):
    """Produce a weighted budgeted random sample.

    Args:
        df: DataFrame with columns for `prob` and `weight`.
        budget: Total weight budget.

    Returns:
        List of index values of df that constitute the sample.
    """
    ids = []
    total = 0
    while total < budget:
        remaining = budget - total
        # Keep only records that still fit within the remaining budget.
        df = df[df.weight <= remaining]
        # Stop if there are no records with small enough weight.
        if df.shape[0] == 0:
            break
        # Select one record, with probability proportional to `prob`.
        selection = random.choice(df.index, p=(df.prob / df.prob.sum()))
        total += df.loc[selection].weight
        # Avoid an inplace drop on a filtered slice (SettingWithCopyWarning).
        df = df.drop(selection)
        ids.append(selection)
    return ids

So it'd be called as weighted_budgeted_random_sample(df, budget=total_snap_participation), where df has a prob column as predicted by the regression model and a weight column set to s006.
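For example, with a toy DataFrame (made-up probabilities and weights, not CPS data):

import pandas as pd

df = pd.DataFrame({'prob': [0.1, 0.5, 0.8],
                   'weight': [100, 50, 75]})
weighted_budgeted_random_sample(df, budget=150)
# e.g. [2, 1] -- output varies; total selected weight never exceeds 150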
