
AutoML optimizations: Fewer neurons in DL grids & Stacked Ensemble metalearner improvements #7579

Closed
exalate-issue-sync bot opened this issue May 11, 2023 · 2 comments
We determined via benchmarking that these two changes generally improve the performance of the Stacked Ensembles in AutoML. They may not be an improvement on every dataset, but across a large, diverse range of datasets, we found that this combination helps.

  • Fewer neurons in the DL grids
  • Add a logit transform to the CV preds before training the Stacked Ensemble metalearner
  • ~~Search alpha = [0.5, 1.0] for regularization in the GLM metalearner in Stacked Ensembles in AutoML. Previously we used only the default of 0.5, but the extra chance to try stronger regularization (for cases with many models, especially poor ones found via random search) helps performance.~~ (this change was later reverted due to a bug)
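For binary classification, the logit transform of the CV predictions described above can be sketched as follows. This is an illustrative NumPy sketch, not H2O's actual implementation (which lives in the Java backend); `logit_transform` and `cv_preds` are hypothetical names:

```python
import numpy as np

def logit_transform(p, eps=1e-15):
    """Map probabilities in (0, 1) to the real line via log(p / (1 - p)).

    Clipping to [eps, 1 - eps] guards against infinities when a base
    model emits a hard 0 or 1 prediction.
    """
    p = np.clip(p, eps, 1.0 - eps)
    return np.log(p / (1.0 - p))

# Hypothetical cross-validated predictions from three base models
# (rows = observations, columns = base models).
cv_preds = np.array([
    [0.91, 0.85, 0.97],
    [0.10, 0.20, 0.05],
    [0.50, 0.55, 0.45],
])

# The GLM metalearner would then be trained on these transformed
# features instead of the raw probabilities.
meta_features = logit_transform(cv_preds)
```

Training the metalearner on logits rather than raw probabilities lets a linear model combine base learners on an unbounded scale, which tends to fit better when base-model probabilities cluster near 0 or 1.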
h2o-ops commented May 14, 2023

JIRA Issue Details

Jira Issue: PUBDEV-8070
Assignee: Erin LeDell
Reporter: Erin LeDell
State: Resolved
Fix Version: 3.32.1.1
Attachments: N/A
Development PRs: Available

h2o-ops commented May 14, 2023

Linked PRs from JIRA

#5401
#5392

h2o-ops closed this as completed May 14, 2023