
Add option to GLM lambda_search to choose to be more memory efficient #8164

Open
exalate-issue-sync bot opened this issue May 11, 2023 · 1 comment
If glm lambda_search=True, cross-validation is enabled, and keep_cross_validation_predictions=True, H2O-3 can crash due to OOM. However, Seb would prefer the following:

If lambda_search=True and save_memory_option=True, lambda search should run in a memory-efficient mode to find the best lambda value. However, if the user chooses keep_cross_validation_predictions=True, lambda search should keep the cross-validation predictions only for the best model found so far and return them to the user.

In addition, make sure that the lambda search result is calculated based on a validation dataset or cross-validation.
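For reference, the configuration that can trigger the OOM today looks roughly like the sketch below. It uses the existing H2O-3 Python API; the file path and response column are placeholders, and the proposed save_memory_option flag does not exist yet, so it appears only as a comment.

```python
# Minimal sketch of the scenario described above (existing H2O-3 Python API).
import h2o
from h2o.estimators import H2OGeneralizedLinearEstimator

h2o.init()
train = h2o.import_file("path/to/train.csv")  # placeholder dataset
predictors = [c for c in train.columns if c != "response"]

glm = H2OGeneralizedLinearEstimator(
    family="binomial",
    lambda_search=True,                      # search over a sequence of lambda values
    nfolds=5,                                # cross-validation enabled
    keep_cross_validation_predictions=True,  # keeping CV predictions is what can
                                             # exhaust memory during lambda search
    # save_memory_option=True,               # proposed flag (not yet implemented):
                                             # keep CV predictions only for the best
                                             # lambda found so far
)
glm.train(x=predictors, y="response", training_frame=train)
```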

@h2o-ops
Collaborator

h2o-ops commented May 14, 2023

JIRA Issue Migration Info

Jira Issue: PUBDEV-7474
Assignee: New H2O Bugs
Reporter: Wendy
State: Open
Fix Version: N/A
Attachments: N/A
Development PRs: N/A
