attempt to simplify some examples + fix stuff for recent mne #367
Conversation
Codecov Report

```diff
@@            Coverage Diff             @@
##           master     #367      +/-   ##
==========================================
- Coverage   82.74%   82.73%    -0.01%
==========================================
  Files          53       53
  Lines        3714     3718        +4
==========================================
+ Hits         3073     3076        +3
- Misses        641      642        +1
```
@robintibor @hubertjb @gemeinl I think I am done here. See the rendered doc at https://1786-232335424-gh.circle-artifacts.com/0/dev/index.html and let me know if you see anything weird.
Fixed small typo that I missed in PR before @agramfort
thx
In general I don't have objections, except I am wondering about the early stopping on the validation set. Normally I would expect a train/valid/test split if one uses early stopping; now it looks like we are doing early stopping on the same set that we report metrics on... that might be quite misleading, no? @agramfort

That's a very good point. So maybe we should update the examples to use train, validation, and test sets? Something like the sketch below.
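As a rough illustration of that split (a hypothetical sketch, not code from this PR): `dataset`, `MyModel`, and the 60/20/20 proportions are placeholder assumptions, and the skorch pieces (`predefined_split`, `EarlyStopping`) are used here only to show the pattern of stopping on a validation set while reporting metrics on a separate test set.

```python
# Hypothetical sketch: three disjoint sets, so early stopping and the
# reported metrics never touch the same data. `dataset` and `MyModel`
# are placeholders, not part of this PR.
import numpy as np
from torch.utils.data import random_split
from skorch import NeuralNetClassifier
from skorch.callbacks import EarlyStopping
from skorch.helper import predefined_split

n = len(dataset)
n_train, n_valid = int(0.6 * n), int(0.2 * n)
train_set, valid_set, test_set = random_split(
    dataset, [n_train, n_valid, n - n_train - n_valid])

clf = NeuralNetClassifier(
    MyModel,
    max_epochs=100,
    # the validation set is used only for monitoring / early stopping
    train_split=predefined_split(valid_set),
    callbacks=[EarlyStopping(monitor='valid_loss', patience=10)],
)
clf.fit(train_set, y=None)

# metrics are reported on the held-out test set, never seen during training
y_true = np.array([y for _, y in test_set])
test_acc = (clf.predict(test_set) == y_true).mean()
```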
Well, that seems like a bigger thing to do, no? Why did you want to introduce early stopping?
Well, see, these examples are educational materials. People get started with deep learning on EEG from such examples. What do we want them to do? What would you recommend to them?

In general, can we maybe merge this as is, make a release now, and do this separately?

I can revert the changes on EarlyStopping if you want, as I agree with you that without separate validation/test sets it's not great...
Yeah, I would revert. I would say general deep learning experience, as well as our own, suggests that in many settings, with the right training pipeline (e.g. the right regularizations and hyperparameters), there is no reason for early stopping: longer training just keeps increasing validation scores, or at least does not decrease them. So I think training without early stopping is also something beginners should learn as the default way.
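For contrast with the earlier sketch, that default could look something like the following (again hypothetical; `MyModel`, `X`, `y`, and the hyperparameters are assumptions): a fixed epoch budget with weight decay as regularization and no EarlyStopping callback.

```python
# Hypothetical sketch: train for a fixed budget and regularize instead
# of stopping early. `MyModel`, `X`, and `y` are placeholders.
import torch
from skorch import NeuralNetClassifier

clf = NeuralNetClassifier(
    MyModel,
    max_epochs=100,                  # always train the full budget
    lr=1e-3,
    optimizer=torch.optim.AdamW,
    optimizer__weight_decay=1e-4,    # regularization instead of early stopping
)
clf.fit(X, y)
```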
So if you would revert, I would make the release :)
done
Also keep in mind that both popular learning rate schedules like cosine annealing and some state-of-the-art techniques to further improve accuracy, like exponential moving average, are not that compatible with early stopping: https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/
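To make that incompatibility concrete, here is a minimal plain-PyTorch sketch (assuming some `model` and `train_loader`, neither from this PR): cosine annealing is parameterized by the total number of epochs, so stopping early leaves the learning rate mid-decay rather than at the end of its schedule.

```python
# Hypothetical sketch: the cosine schedule needs the full budget (T_max)
# up front. `model` and `train_loader` are placeholders.
import torch
import torch.nn.functional as F

max_epochs = 100
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=max_epochs)

for epoch in range(max_epochs):
    for X, y in train_loader:
        optimizer.zero_grad()
        F.cross_entropy(model(X), y).backward()
        optimizer.step()
    # the LR follows a cosine from 0.1 toward ~0 at epoch 100; stopping
    # at, say, epoch 40 would leave it at a still-high mid-schedule value
    scheduler.step()
```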
great, merged
Which part of this page are you pointing me to?
This is an example of a very state-of-the-art training pipeline for deep learning in computer vision, and it does not use early stopping. Also, some parts of it are, at least from my view, not really in the spirit of using early stopping.
Also, in general, I wanted to bring this pipeline to your attention; several of the methods there are fairly generic and could also be tried for our deep learning pipelines on EEG.
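One of those generic methods, an exponential moving average of the weights, can be sketched with PyTorch's `torch.optim.swa_utils.AveragedModel` (a hypothetical example, not from this PR; `model`, `optimizer`, and `train_loader` are assumed to exist):

```python
# Hypothetical sketch: maintain an EMA copy of the weights alongside
# normal training. `model`, `optimizer`, `train_loader` are placeholders.
import torch
import torch.nn.functional as F
from torch.optim.swa_utils import AveragedModel

decay = 0.999
ema_model = AveragedModel(
    model,
    # EMA update: ema <- decay * ema + (1 - decay) * current
    avg_fn=lambda ema_p, p, num_averaged: decay * ema_p + (1 - decay) * p,
)

max_epochs = 100
for epoch in range(max_epochs):
    for X, y in train_loader:
        optimizer.zero_grad()
        F.cross_entropy(model(X), y).backward()
        optimizer.step()
        ema_model.update_parameters(model)  # refresh the EMA copy

# evaluation would then use `ema_model` rather than `model`
```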
we should have a chat ;)
@hubertjb @robintibor @gemeinl
Can you have a look? If we could simplify further before the next release, it would be great.