
Let the DispReconstructor also compute a score for the sign prediction #2479

Merged
maxnoe merged 1 commit into main from disp_score on Dec 7, 2023

Conversation

@maxnoe (Member) commented Dec 6, 2023

No description provided.

codecov bot commented Dec 6, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (49695e7) 92.44% compared to head (6b7fbfd) 92.44%.
Report is 2 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #2479   +/-   ##
=======================================
  Coverage   92.44%   92.44%           
=======================================
  Files         234      234           
  Lines       19900    19917   +17     
=======================================
+ Hits        18396    18413   +17     
  Misses       1504     1504           

☔ View full report in Codecov by Sentry.

LukasBeiske previously approved these changes Dec 6, 2023

@LukasBeiske (Contributor) left a comment:

Changelog missing

@@ -681,12 +682,17 @@ def _predict(self, key, table):
        else:
            prediction[valid] = valid_norms

        prediction[valid] *= self._models[key][1].predict(X)
        sign_proba = self._models[key][1].predict_proba(X)[:, 0]
        # proba is [0 and 1] where 0 => very certain -1, 1 => very certain 1

Contributor:
I don't understand this explanation; is it supposed to say that sign_proba goes from -1 to 1?

Contributor:
The classes predicted by sign are "1" and "-1", and the [:, 0] column of the predict_proba output contains the likelihood scores for the "1" class. Therefore a score of 0 corresponds to a high likelihood of class "-1".
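
For readers less familiar with the scikit-learn convention, here is a standalone sketch (not ctapipe code; the classifier and toy data are made up) showing how the predict_proba columns map onto the class labels of a binary sign classifier:

```python
# Standalone sketch: predict_proba columns follow clf.classes_, so it is
# safer to look up the column of the class you want than to hard-code 0 or 1.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
y = np.where(X[:, 0] > 0, 1, -1)               # toy sign target with labels -1 and 1

clf = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)

proba = clf.predict_proba(X)                   # shape (n_samples, 2)
positive_column = list(clf.classes_).index(1)  # column scoring the "+1" sign
sign_score = proba[:, positive_column]         # 1 => confident +1, 0 => confident -1
print(clf.classes_, sign_score[:5])
```

Checking clf.classes_ explicitly removes any ambiguity about which column belongs to which sign.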

@@ -945,7 +964,7 @@ def _cross_validate_regressor(self, telescope_type, train, test):
         prediction, _ = regressor._predict(telescope_type, test)
         truth = test[regressor.target]
         r2 = r2_score(truth, prediction)
-        return prediction, truth, {"R^2": r2}
+        return {f"{regressor.prefix}_energy": prediction}, truth, {"R^2": r2}

Contributor:
It looks like there isn't any test for CrossValidator? Not really something to resolve in this PR, I guess...

@maxnoe (Member, Author) Dec 6, 2023:

There is no direct test, yes.

However, the code is covered by the tests for the train tools, which also inspect the cross-validation output.
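
As background for the change in the hunk above, here is a hypothetical sketch (these names are not ctapipe's actual CrossValidator API) of why keying the per-fold predictions by a name such as f"{regressor.prefix}_energy" is convenient: the same aggregation code can then handle reconstructors that produce several named outputs, for example the disp norm together with the new sign score.

```python
# Hypothetical sketch of aggregating per-fold prediction dicts; names are made up.
from collections import defaultdict
import numpy as np

def aggregate_folds(fold_results):
    """Stack per-fold prediction dicts into one array per named output."""
    collected = defaultdict(list)
    for predictions, _truth, _metrics in fold_results:
        for name, values in predictions.items():
            collected[name].append(values)
    return {name: np.concatenate(chunks) for name, chunks in collected.items()}

# Two folds, each returning two named outputs (column names are hypothetical):
folds = [
    ({"disp_parameter": np.array([0.1, 0.2]), "disp_sign_score": np.array([0.9, 0.4])}, None, {}),
    ({"disp_parameter": np.array([0.3]), "disp_sign_score": np.array([0.7])}, None, {}),
]
print(aggregate_folds(folds))
```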

@maxnoe (Member, Author) commented Dec 7, 2023

Here is a plot of the distribution for LST in prod5 (only 50k events, simple model config):

[Image: disp_score]

Looks as expected, I would say
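
For anyone wanting to make a similar check, a minimal sketch (assuming the sign scores have already been loaded into a numpy array; the file and column names in real ctapipe output will differ):

```python
# Minimal sketch: histogram of the disp sign scores; the input file is a placeholder.
import numpy as np
import matplotlib.pyplot as plt

sign_score = np.load("disp_sign_score.npy")  # placeholder, not a real ctapipe file
plt.hist(sign_score, bins=50)
plt.xlabel("disp sign score")
plt.ylabel("number of events")
plt.tight_layout()
plt.savefig("disp_score.png")
```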

@maxnoe maxnoe merged commit f86411d into main Dec 7, 2023
13 of 14 checks passed
@maxnoe maxnoe deleted the disp_score branch December 7, 2023 13:28