Issue with custom scorer #14

Closed
mirix opened this issue Aug 4, 2022 · 1 comment


mirix commented Aug 4, 2022

Hello,

I have an unbalanced dataset and I am trying to create a custom scorer that finds the best possible recall above a given precision for the minority class.

The opposite direction seems to work well. When I feed the following scorer to shap-hypetune, it produces consistent results for the precision:

import numpy as np
from sklearn.metrics import precision_recall_curve

def precision_at_recall(y_true, y_hat):
	precision, recall, thresholds = precision_recall_curve(y_true, y_hat, pos_label=1)
	ix = np.argmax(precision[recall >= .9])
	return 'precision_at_recall', precision[ix], True

The recall and precision for the minority class at a threshold of 0.5 are both around 0.85. If we require a recall of at least 0.9, the precision decreases accordingly, as expected.

However, the following does not work:

def recall_at_precision(y_true, y_hat):
	precision, recall, thresholds = precision_recall_curve(y_true, y_hat, pos_label=1)
	ix = np.argmax(recall[precision >= .9])
	return 'recall_at_precision', recall[ix], True

It always produces a perfect recall (1), regardless of the precision threshold, even when it is set to 1.
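For what it's worth, this "always perfect recall" symptom can be reproduced with plain NumPy: `np.argmax` over a boolean-masked slice returns a position within the slice, not within the full array, so indexing the full `recall` array with it lands near the front, where recall is still 1. A toy sketch (the array values below are made up, just shaped like `precision_recall_curve` output, where recall is decreasing and precision roughly increasing):

```python
import numpy as np

# toy arrays shaped like precision_recall_curve output
precision = np.array([0.5, 0.6, 0.8, 0.95, 1.0])
recall = np.array([1.0, 0.9, 0.7, 0.4, 0.0])

mask = precision >= 0.9          # selects the *last* entries
ix = np.argmax(recall[mask])     # index into the masked slice: 0
print(recall[ix])                # indexes the full array -> 1.0 (spurious perfect recall)
print(recall[mask][ix])          # indexes the slice     -> 0.4 (the intended value)
```

The first snippet has the same latent bug, but `recall >= .9` selects the leading entries of the curve, so slice positions happen to line up with full-array positions there.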

@mirix mirix closed this as completed Aug 4, 2022

mirix commented Aug 4, 2022

Axis issue...
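A minimal corrected sketch, assuming the "axis issue" is the masked-indexing mismatch above (the index from `np.argmax(recall[precision >= .9])` addresses the masked slice, not the full `recall` array). Taking the max inside the slice avoids the index altogether:

```python
from sklearn.metrics import precision_recall_curve

def recall_at_precision(y_true, y_hat):
    precision, recall, thresholds = precision_recall_curve(y_true, y_hat, pos_label=1)
    # keep the reduction inside the masked slice instead of
    # indexing the full recall array with a slice-local position
    mask = precision >= 0.9
    return 'recall_at_precision', recall[mask].max(), True
```

(`recall[mask]` is never empty here, since `precision_recall_curve` always ends with precision 1 and recall 0.)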
