[WIP] Flatting evaluation #486
Conversation
Nice work @bruAristimunha! Maybe it would make sense to rewrite the evaluations from scratch and leave the old ones as deprecated, since their structure will change substantially? I think we can reuse even more code between the different evals.
@@ -77,7 +80,7 @@ def __init__(
        if not isinstance(paradigm, BaseParadigm):
            raise (ValueError("paradigm must be a Paradigm instance"))
        self.paradigm = paradigm

        self.n_splits = n_splits
I think we should protect this new attribute, or at least raise a warning if the user changes it. One of the purposes of MOABB is to standardize the evaluation of algorithms across BCI research, so it's best if everyone uses 5 folds.
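A minimal sketch of that guard, assuming a hypothetical `Evaluation` base class (the class and attribute layout here are illustrative, not MOABB's actual internals):

```python
import warnings


class Evaluation:
    """Illustrative base class; only the n_splits guard is shown."""

    _DEFAULT_N_SPLITS = 5

    def __init__(self, n_splits=5):
        self.n_splits = n_splits  # routed through the setter below

    @property
    def n_splits(self):
        return self._n_splits

    @n_splits.setter
    def n_splits(self, value):
        if value != self._DEFAULT_N_SPLITS:
            warnings.warn(
                "Changing n_splits from the default of 5 makes results "
                "harder to compare across MOABB benchmarks.",
                UserWarning,
            )
        self._n_splits = value
```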
        grid_clf = clone(clf)
        # TODO: find a way to expose this for loop.
        # Here, we will have n_splits = n_sessions * n_splits (default 5)
I think we should even have n_splits = n_sessions*n_splits*n_pipelines.
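As a concrete illustration of that count (the numbers are hypothetical, not from this PR): with 2 sessions, the default 5 folds, and 3 pipelines, flattening would produce 2 * 5 * 3 = 30 independent fit/score jobs for a single Parallel call to iterate over:

```python
from itertools import product

# Hypothetical sizes, for illustration only.
n_sessions, n_folds, n_pipelines = 2, 5, 3

# Flatten the three loops into one list of (session, fold, pipeline) jobs.
jobs = list(product(range(n_sessions), range(n_folds), range(n_pipelines)))
assert len(jobs) == n_sessions * n_folds * n_pipelines  # 30 jobs
```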
        grid_clf = clone(clf)
        # TODO: find a way to expose this for loop.
Yes, I think that if we want to have only one Parallel call, to avoid nesting them, we should put it here instead of across the datasets and subjects, for two reasons (see the sketch after this list):
- parallel calls between subjects and datasets mean loading a lot of data simultaneously, so it is not very efficient;
- if the user also wants parallel calls between datasets or subjects, they can launch multiple scripts, each with a different subject.
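A minimal sketch of what that single flattened Parallel could look like, assuming one subject's data is already in memory; the function and variable names are illustrative, not MOABB's actual internals:

```python
from joblib import Parallel, delayed
from sklearn.base import clone
from sklearn.model_selection import StratifiedKFold


def _fit_and_score(pipeline, X, y, train, test):
    # Fit a fresh clone on the training fold and score on the test fold.
    model = clone(pipeline).fit(X[train], y[train])
    return model.score(X[test], y[test])


def evaluate_subject(X, y, pipelines, n_splits=5, n_jobs=-1):
    """Run all (fold, pipeline) combinations in a single Parallel call."""
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)
    # Flatten folds x pipelines into one job list: no nested Parallel.
    jobs = [
        (name, clf, train, test)
        for train, test in cv.split(X, y)
        for name, clf in pipelines.items()
    ]
    scores = Parallel(n_jobs=n_jobs)(
        delayed(_fit_and_score)(clf, X, y, train, test)
        for _, clf, train, test in jobs
    )
    return [(name, score) for (name, _, _, _), score in zip(jobs, scores)]
```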
@@ -168,7 +172,7 @@ def _evaluate(
        results = Parallel(n_jobs=self.n_jobs_evaluation, verbose=1)(
See the comment below about this Parallel.
Co-authored-by: PierreGtch <25532709+PierreGtch@users.noreply.github.com>
I will restart this PR. We changed a lot of stuff in the evaluation file.