
LightGBM causes Optuna to go out of memory and crash #687

Open
AkshayNovacene opened this issue Dec 13, 2023 · 2 comments

AkshayNovacene commented Dec 13, 2023

I was wondering if there is a way to batch the inputs when using Optuna, or otherwise reduce its memory load.

Optuna goes out of memory and crashes on Google Colab when I run it on my large dataset. This happens only with LightGBM; other models work fine.

pplonski (Contributor) commented

Hi @AkshayNovacene,

LightGBM has some known issues with memory management: microsoft/LightGBM#4239

The solution would be to train each model in a separate process, so that all memory is released after training. I'm thinking about using the Ray framework (https://github.com/ray-project/ray) for this: each model can be trained in its own process, and training can be distributed across multiple machines.
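
A minimal sketch of the separate-process idea, using Python's standard `multiprocessing` module instead of Ray; the dataset, function names, and search values here are illustrative assumptions, not part of mljar-supervised:

```python
# Train each LightGBM model in its own process so that all native
# allocations are returned to the OS when the process exits.
import multiprocessing as mp

import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split


def train_once(params, queue):
    """Train one model and send only the validation score back.

    The model (and LightGBM's native memory) dies with the process,
    so memory is fully released after each training run.
    """
    X, y = make_classification(n_samples=10_000, n_features=50, random_state=0)
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)
    model = lgb.LGBMClassifier(**params)
    model.fit(X_tr, y_tr)
    queue.put(log_loss(y_va, model.predict_proba(X_va)))


if __name__ == "__main__":
    ctx = mp.get_context("spawn")  # fresh interpreter per run
    queue = ctx.Queue()
    for num_leaves in (15, 31, 63):
        p = ctx.Process(target=train_once,
                        args=({"num_leaves": num_leaves}, queue))
        p.start()
        p.join()  # process exit releases all memory
        print(f"num_leaves={num_leaves}: logloss={queue.get():.4f}")
```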

AkshayNovacene (Author) commented Dec 14, 2023

Thanks for the prompt reply. I am still digging into the issue, because LightGBM works fine when I use Optuna out of the box; there I was training lgb.LGBMClassifier manually. It goes out of memory only when I use the optuna mode through the automl package and try to train LightGBM.
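
For reference, a minimal sketch of the "out of the box" Optuna + lgb.LGBMClassifier setup described above; the dataset, search space, and trial count are illustrative assumptions:

```python
# Plain Optuna tuning of lgb.LGBMClassifier, without any automl package.
import lightgbm as lgb
import optuna
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=10_000, n_features=50, random_state=0)


def objective(trial):
    params = {
        "num_leaves": trial.suggest_int("num_leaves", 15, 255),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "n_estimators": trial.suggest_int("n_estimators", 50, 500),
    }
    model = lgb.LGBMClassifier(**params)
    # neg_log_loss is negated, so maximizing it minimizes log loss
    return cross_val_score(model, X, y, cv=3, scoring="neg_log_loss").mean()


study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
```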
