GLM lambda search if given multiple alphas should return best results across all alpha/lambda values #8157
Comments
Sebastien Poirier commented: Also, shouldn't the same logic apply if the user provides multiple lambdas, since {{_lambda}} is also an array?
Wendy commented: Yes, I think that should be the case whenever we have an array of alpha/lambda values.
Wendy commented: Note one thing: when there are multiple lambdas, the current implementation will not restart from the initial value but will instead continue training the new lambda value using the beta/Gram matrix from the previous lambda. I know that when lambda search is turned on we use a warm start, but if you are just trying out individual lambda/alpha values, this should not happen.
Wendy commented: Better yet, I think we should give the user the choice of a cold start if they want it.
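To make the warm-start vs. cold-start distinction concrete, here is a minimal sketch (not h2o-3 internals) of training over a list of lambda values; `fit_glm` is a hypothetical stand-in for the per-lambda solver, and the `cold_start` flag mirrors the choice suggested above:

```python
# Sketch contrasting warm vs. cold start across a list of lambdas.
# fit_glm is hypothetical; it just makes the control flow visible.

def fit_glm(lmbda, beta_init):
    # Hypothetical single-lambda solve starting from beta_init.
    return [b + lmbda for b in beta_init]

def train_over_lambdas(lambdas, cold_start):
    beta = [0.0, 0.0]  # initial coefficients
    models = []
    for lmbda in lambdas:
        # Cold start: reset to the initial value for every lambda.
        # Warm start: reuse the coefficients from the previous lambda.
        init = [0.0, 0.0] if cold_start else beta
        beta = fit_glm(lmbda, init)
        models.append((lmbda, beta))
    return models
```

With `cold_start=True` each lambda is solved independently; with `cold_start=False` the second lambda continues from the first lambda's solution, which is the behavior the comment above says should not happen when the user is just trying out individual lambda values.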
Erin LeDell commented: [~accountid:557058:1f01b471-f37b-40af-bae9-a18b38e24549] Should this ticket also address that there's currently no way to inspect the model to see which value of alpha (and maybe also lambda) was used when an array of values is passed? At least in R, the params slot stores the whole array (since that was the input), and there's no way to find out which alpha was selected. I thought there was a Jira for this, but I can't find it.
JIRA Issue Migration Info: Jira Issue PUBDEV-7481. Linked PRs from JIRA.
The current version of h2o-3, when given multiple alphas with GLM lambda_search enabled, seems to return the best model only for the first alpha ({{_alpha[0]}}), ignoring the other alphas.
When multiple alphas are provided ({{_alpha}} is an array), GLM is expected to try all of them (except if lambda=0), possibly applying a lambda search to each, and return the model with the best alpha/lambda combination.
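The expected selection logic can be sketched as a loop over alphas with a lambda search per alpha, keeping the best combination overall. This is an illustration only, not h2o-3's implementation; `lambda_search` and its fabricated score surface are hypothetical stand-ins for the real GLM machinery:

```python
# Sketch of the expected behavior: run a lambda search for each alpha
# and keep the best alpha/lambda combination overall.

def lambda_search(alpha):
    # Hypothetical: returns (best_lambda, validation_score) for this
    # alpha. We fake a score surface whose minimum is at alpha = 0.5.
    best_lambda = 0.1
    score = abs(alpha - 0.5)  # lower is better
    return best_lambda, score

def best_over_alphas(alphas):
    best = None
    for alpha in alphas:
        lmbda, score = lambda_search(alpha)
        if best is None or score < best[2]:
            best = (alpha, lmbda, score)
    return best  # (alpha, lambda, score) of the winning combination
```

Returning the `(alpha, lambda)` pair of the winner would also address the inspection problem raised in the comments: the user could see which alpha was actually selected rather than only the input array.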