
Question on the Over/Under sampling on validation and test splits #49

Closed
brosscle opened this issue Aug 23, 2022 · 2 comments


@brosscle

Hi,
I defined a classification hyperpipe that includes a PipelineElement that oversamples or undersamples the input dataset. Is this step applied only to the training splits of the nested cross-validation, or also to the validation and test splits? In other words, I would like to know whether the metrics used to select the best models and to evaluate them are computed only on the "real" samples and not on the "real + synthetic" ones (in the case of oversampling), and on all the samples rather than only the retained subset (in the case of undersampling).
Do you know the answer, or perhaps a document where I could look it up? I haven't found it in the documentation, but maybe I searched badly.
Thanks a lot!
Clément
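For context, a setup like the one described might look roughly like this; it's a minimal sketch, and the `ImbalancedDataTransformer` element name and its `method_name` parameter are assumptions based on PHOTONAI's imbalanced-learn wrapper conventions, not something confirmed in this thread:

```python
# Minimal sketch of a hyperpipe with a resampling element.
# NOTE: 'ImbalancedDataTransformer' and method_name are assumed from
# PHOTONAI's documented conventions -- check the current API before use.
from sklearn.model_selection import KFold
from sklearn.datasets import make_classification
from photonai.base import Hyperpipe, PipelineElement

# Imbalanced toy dataset (90% / 10% class split).
X, y = make_classification(n_samples=200, weights=[0.9, 0.1], random_state=42)

pipe = Hyperpipe('imbalance_example',
                 optimizer='grid_search',
                 metrics=['balanced_accuracy'],
                 best_config_metric='balanced_accuracy',
                 outer_cv=KFold(n_splits=3),
                 inner_cv=KFold(n_splits=3))

# Resampling element, the step the question is about.
pipe += PipelineElement('ImbalancedDataTransformer',
                        method_name='RandomUnderSampler')
pipe += PipelineElement('SVC')

pipe.fit(X, y)
```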

@RLeenings
Collaborator

Dear Clément,
Please excuse the belated answer.
Over- and undersampling are SKIPPED when generating the validation and test predictions.
The performance metrics are calculated ONLY on the "real" samples, as you said.
Hope that helps.
Ramona
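For anyone landing here later, the confirmed behavior can be mimicked by hand with scikit-learn and imbalanced-learn. This is a sketch of the semantics, not PHOTONAI's internals: resampling touches the training fold only, while the metric is computed on the untouched fold.

```python
# Train-only resampling: resample inside the CV loop on the training
# fold, then score on the original ("real") held-out fold.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import balanced_accuracy_score
from imblearn.over_sampling import RandomOverSampler

X, y = make_classification(n_samples=200, weights=[0.9, 0.1], random_state=0)

for train_idx, test_idx in StratifiedKFold(n_splits=3).split(X, y):
    # Oversampling is applied to the training fold only ...
    X_res, y_res = RandomOverSampler(random_state=0).fit_resample(
        X[train_idx], y[train_idx])
    model = SVC().fit(X_res, y_res)
    # ... and the metric is computed only on the real held-out samples.
    print(balanced_accuracy_score(y[test_idx], model.predict(X[test_idx])))
```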

@brosscle
Author

Hi, thanks a lot, that's exactly the answer I was looking for :)
