[BUG]: Target transformation with reversible transformers leads to faulty scoring #236
- Here's where the extended scorer transforms the y (always): julearn/julearn/scoring/available_scorers.py, lines 178 to 182 in dba3071
- This is where the scorers are "wrapped" only if the …: julearn/julearn/scoring/available_scorers.py, lines 161 to 164 in dba3071
- This is where the …: julearn/julearn/scoring/available_scorers.py, lines 127 to 160 in dba3071
- This is where …: lines 348 to 350 in dba3071
- Here are the two lines that set …: line 251 in dba3071, line 321 in dba3071
So we always use the extended scorer, even if the y transformer is reversible. And in this specific case, scikit-learn transforms the …
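A minimal standalone sketch of the failure mode (plain scikit-learn, not julearn's actual code path): the model is fit on the z-scored target, predictions are inverse-transformed back to the original scale, but the scorer still compares them against the transformed y.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 50.0 + 10.0 * X[:, 0] + rng.normal(size=200)  # target on its original scale

# z-score the target and fit on the transformed y
scaler = StandardScaler()
y_z = scaler.fit_transform(y.reshape(-1, 1)).ravel()
model = LinearRegression().fit(X, y_z)

# predictions inverse-transformed back to the original scale
y_pred = scaler.inverse_transform(model.predict(X).reshape(-1, 1)).ravel()

# correct: score against the original ground truth
r2_ok = r2_score(y, y_pred)

# faulty (what this issue describes): score the inverse-transformed
# predictions against the *still-transformed* y
r2_bad = r2_score(y_z, y_pred)

print(r2_ok)   # close to 1
print(r2_bad)  # large negative value due to the scale mismatch
```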
Also, I want to report something I observed before: although I got wrongly scaled metrics when z-scoring the target, the Pearson correlation values are the same whether the target is z-scored or not. Is that expected? In my own tests, the scale-sensitive metrics always differ when I z-score the target. Example: https://chat.openai.com/share/f625997a-eb50-40af-9cbb-89d450cdb364
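Identical Pearson values are actually expected here: Pearson correlation is invariant under affine transformations (such as z-scoring) of either input, so only scale-sensitive metrics like r2 or MSE reveal the bug. A quick check with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(42)
y_true = rng.normal(loc=100.0, scale=15.0, size=500)
y_pred = y_true + rng.normal(scale=5.0, size=500)  # noisy predictions

# z-score the ground truth (an affine transformation)
y_true_z = (y_true - y_true.mean()) / y_true.std()

r_raw = np.corrcoef(y_true, y_pred)[0, 1]
r_z = np.corrcoef(y_true_z, y_pred)[0, 1]

# Pearson correlation is unchanged by z-scoring either variable
print(r_raw, r_z)
```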
Is there an existing issue for this?
Current Behavior
Using z-scoring leads to wrong scoring: we probably evaluate the correctly inverse-transformed predictions against a scaled ground truth. You can see this because r2_corr looks fine, but r2 shows a high error, as it is scale-sensitive.
See the following image.
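The pattern in the image can be reproduced with synthetic data (a sketch, not julearn's code; here r2_corr is taken to mean the squared Pearson correlation): good original-scale predictions scored against a z-scored ground truth yield a fine r2_corr but a badly wrong r2.

```python
import numpy as np
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
y = rng.normal(loc=30.0, scale=8.0, size=300)
pred = y + rng.normal(scale=2.0, size=300)  # good predictions, original scale
y_scaled = (y - y.mean()) / y.std()         # ground truth left z-scored (the bug)

# scale-sensitive: comparing original-scale predictions against a
# z-scored ground truth produces a very negative R^2
r2_wrong = r2_score(y_scaled, pred)

# scale-insensitive: squared Pearson correlation is unaffected and
# therefore looks fine, masking the bug
r2_corr = np.corrcoef(y_scaled, pred)[0, 1] ** 2

print(r2_wrong)  # very negative
print(r2_corr)   # still high
```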
Expected Behavior
Scoring with invertible target transformers should score against the original ground truth.
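One way to get this expected behavior with plain scikit-learn (shown as a reference point, not as julearn's implementation) is `TransformedTargetRegressor`: it z-scores y internally for fitting, and its `predict()` already returns inverse-transformed values, so scoring against the original y is correct by construction.

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 5.0 + 3.0 * X[:, 0] + rng.normal(scale=0.5, size=200)

# y is transformed only inside fit(); predict() returns original-scale values
model = TransformedTargetRegressor(
    regressor=LinearRegression(), transformer=StandardScaler()
).fit(X, y)

r2 = r2_score(y, model.predict(X))  # scored against the original ground truth
print(r2)  # high, as expected
```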
Steps To Reproduce
Environment
Relevant log output
No response
Anything else?
No response