Early Stopping for HParam Optimization when model is not converging #22
Hi,
I suggest writing a small piece of code that stops execution as soon as NaN or Inf is detected. Note that this is not a healthy condition, especially if it happens frequently: it may be due to a suboptimal choice of the hyperparameter ranges set in Orion, or to a bug in the model. Before running the hyperparameter tuning phase, my suggestion is to gain some insight into meaningful ranges for your hyperparameters by manually running single experiments on single subjects.
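The suggested guard can be sketched as a small helper that aborts a trial as soon as a non-finite loss appears. This is a minimal illustration, not the benchmark's actual code; `run_trial` and its interface are hypothetical:

```python
import math

def run_trial(losses):
    """Simulate one hparam trial: iterate over per-epoch loss values and
    abort as soon as a NaN/Inf loss appears.

    Returns (completed_epochs, aborted)."""
    for epoch, loss in enumerate(losses):
        if not math.isfinite(loss):
            # Non-finite loss: likely bad hyperparameters or a model bug.
            # Stop the trial instead of running the remaining epochs.
            return epoch, True
    return len(losses), False
```

In a real training loop the same check (`math.isfinite(loss.item())`, or `torch.isfinite` on the loss tensor) would be applied after each epoch, raising an exception or returning a sentinel objective value so the optimizer moves on to the next trial.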
On Tue, Feb 20, 2024 at 12:08 PM Drew Wagner wrote:
I'm running run_hparam_optimization.sh in the MOABB benchmarks using a variation of EEGNet that I developed (for the BNCI2014001 dataset). Often the hyperparameters chosen by Orion lead to a model that does not converge and whose accuracy is no better than random guessing. However, the program still runs through all the epochs even though it is obviously not converging.
Is there a way to prevent this with the currently available options (other than changing the Orion flags to adjust the search space)? If not, is this a feature that might be useful to add?
Ok, thanks! I will look into adjusting the search space.
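For reference, Orion lets you narrow the search space by editing the priors attached to each hyperparameter on the command line. A hedged sketch (the script name and parameter are placeholders, not the benchmark's actual flags):

```shell
# Tighten the learning-rate prior to a range known to train stably,
# rather than letting Orion sample values that diverge.
orion hunt -n eegnet_tuning ./train.sh --lr~'loguniform(1e-4, 1e-2)'
```

Ranges established by a few manual single-subject runs, as suggested above, are a reasonable starting point for these priors.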
I just wanted to follow up and say that it did end up being a small bug in the model. Once fixed, the hyperparameter optimization started working great, and appears to be showing some positive results so far. I look forward to sharing the results once the experiment is complete!