
Early Stopping for HParam Optimization when model is not converging #22

Closed
Drew-Wagner opened this issue Feb 20, 2024 · 3 comments

@Drew-Wagner (Contributor)

I'm running the run_hparam_optimization.sh script from the MOABB benchmarks with a variation of EEGNet that I developed, on the BNCI2014001 dataset.

Often the hyperparameters chosen by Orion lead to a model that does not converge and whose accuracy is no better than random guessing. However, the program still runs through all the epochs even though the model is clearly not converging.

Is there a way to prevent this with the currently available options (other than adjusting the Orion flags to narrow the search space)? If not, is this a feature that might be useful to add?
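To illustrate what such a feature could look like, here is a minimal sketch of a chance-level early-stopping check. This is a hypothetical helper, not part of the MOABB benchmark or Orion APIs: it stops a trial once validation accuracy has stayed within a small tolerance of random guessing for too many consecutive epochs.

```python
class ChanceLevelStopper:
    """Hypothetical helper: flag a trial for early stopping when validation
    accuracy hovers at chance level for `patience` consecutive epochs."""

    def __init__(self, n_classes, patience=5, tolerance=0.02):
        self.chance = 1.0 / n_classes  # e.g. 0.25 for the 4-class BNCI2014001 task
        self.patience = patience
        self.tolerance = tolerance
        self.bad_epochs = 0

    def should_stop(self, valid_acc):
        """Return True once accuracy has stayed near chance for too long."""
        if valid_acc <= self.chance + self.tolerance:
            self.bad_epochs += 1  # still indistinguishable from random guessing
        else:
            self.bad_epochs = 0   # any real learning signal resets the counter
        return self.bad_epochs >= self.patience
```

In a training loop this would be called once per epoch, e.g. `if stopper.should_stop(valid_acc): break`, so a non-converging trial ends after `patience` epochs instead of running the full schedule.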

@mravanelli (Contributor) commented Feb 20, 2024 via email

@Drew-Wagner (Contributor, Author)

OK, thanks! I will look into adjusting the search space.

@Drew-Wagner (Contributor, Author)

I just wanted to follow up and say that it did end up being a small bug in the model. Once fixed, the hyperparameter optimization started working great and appears to be showing some positive results so far. I look forward to sharing the results once the experiment is complete!
