Feature/sensitivity low alert #2001
Conversation
Codecov Report
@@ Coverage Diff @@
## main #2001 +/- ##
=========================================
+ Coverage 100.0% 100.0% +0.1%
=========================================
Files 278 280 +2
Lines 22770 22886 +116
=========================================
+ Hits 22761 22877 +116
Misses 9 9
Continue to review full report at Codecov.
Please advise on how to resolve this. I only have one GitHub account, so I'm not sure what is going on here! Thanks!
Hi @skvorekn, thanks for contributing! Regarding your question about the CLA, it appears the email you've made your commits with doesn't match one of the emails on your GitHub account. There's a guide from GitHub on how to set your email address on the GitHub site, including a section on how to set it on the command line. My recommendation would be to set this to your GitHub account email and then squash your changes:
If that doesn't work, please let us know.
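For reference, the fix described above can be sketched on the command line like this. The email address below is a placeholder, not the contributor's real address; use whichever address is actually attached to your GitHub account.

```shell
# Set the email git records on your commits (placeholder address shown).
git config --global user.email "you@users.noreply.github.com"
git config --global user.email    # verify: prints the address just set
# Then squash the branch's commits so they all carry the new email, e.g.:
#   git rebase -i main   (mark every commit after the first as "squash")
```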
Solid. I like this PR and the testing framework. Thanks a lot for the contribution!
Force-pushed from 5e79138 to 808ab3a
I misunderstood the comment about squashing changes, so I did that out of order. Looks like things are passing now, though! Also made a few minor updates based on @chukarsten's comments. Thanks for your feedback; I really appreciate it! Let me know if there is anything else.
This is wonderful! The code looks great. My only comment was about the class name.
@skvorekn I think you might need to re-sign our CLA to pass our CI check. You were this close 🤏 to making it into the last release. I also rebased your remote branch, so you'll probably have to pull.
docs: update api reference with SLA
refactor: X is not a used argument in objective function
feat: initialize SLA as an objective
feat: raise error if alert rate is invalid
feat: binary objective test base class
test: sla using base class
refactor: define data in run_pipeline
test: class flags match expected given list of predictions and alert rate
style: rename object to obj
test: expected sensitivity score
chore: remove commented out get_data fn
style: rename for abstract method
feat: test all binary base class tests with wrapper test
style: linting fixes
refactor: scope of fixtures is within class
fix: wrong exception raised
fix: rename base class tests to methods so test doesn't run
refactor: move binary base class to existing file
test: 8 binary problem types
Revert "docs: update api reference with SLA" (this reverts commit e26bf77)
chore: rebase with main
style: sort imports
fix: remove X argument from docstring
feat: Exception to ValueError
feat: Exception to ValueError
…vorekn/evalml into feature/sensitivity-low-alert
Force-pushed from 5fb07e0 to d2b5c42
Done! Looks like the CLA needs to be re-signed by @chukarsten as well!
@skvorekn merged! We'll release it next week with 0.22.0.
Sensitivity at low alert rates is an evaluation metric used in the final round of the Centers for Medicare & Medicaid Services AI Health Outcomes Challenge.
It is an important clinical accuracy metric because it gives insight into how resource reallocation will impact outcomes. For example, if we are predicting mortality of the Medicare population, a hospital may want to allocate more resources to people with the top, say, 1% of risk scores (predictions). This 1% is the alert rate. As a measure of accuracy, the hospital may want to know what percent of all deaths are captured by focusing on the top 1% (sensitivity).
The sensitivity metric is calculated by classifying the top 1% of predictions as the 'True' class (we predict they will die), and the remaining 99% as 'False' (we predict they will not die). Sensitivity is then measured at this alert rate (# correctly predicted true / # actual true), i.e., the fraction of all actual deaths that fall within the flagged top 1%.
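The calculation above can be sketched in a few lines of NumPy. This is a minimal illustration under the stated definition, not evalml's exact `SensitivityLowAlert` implementation; the function name and the quantile-based cutoff are illustrative assumptions.

```python
import numpy as np

def sensitivity_at_low_alert(y_true, y_pred_proba, alert_rate=0.01):
    """Flag the top `alert_rate` fraction of scores as positive, then
    return sensitivity: the share of actual positives that were flagged."""
    y_true = np.asarray(y_true)
    y_pred_proba = np.asarray(y_pred_proba)
    # Score cutoff at the (1 - alert_rate) quantile; scores above it
    # fall in the alert group.
    threshold = np.quantile(y_pred_proba, 1 - alert_rate)
    flagged = y_pred_proba > threshold
    true_positives = np.count_nonzero(flagged & (y_true == 1))
    actual_positives = np.count_nonzero(y_true == 1)
    return true_positives / actual_positives if actual_positives else 0.0

# Toy cohort of 100 patients: risk scores 0.00..0.99; six actual deaths,
# five among the ten highest-risk patients and one at mid-range risk.
scores = np.arange(100) / 100
labels = np.zeros(100, dtype=int)
labels[[50, 95, 96, 97, 98, 99]] = 1
print(sensitivity_at_low_alert(labels, scores, alert_rate=0.10))
# → 0.8333... (5 of the 6 deaths are captured at a 10% alert rate)
```

Note that only the ranking of the scores matters: any monotonic rescaling of the predicted probabilities leaves the flagged top 1% (and hence the metric) unchanged.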