The model_summary in Stacked Ensemble is currently computed on the client side by calling h2o.get_model() on each of the base models to get its algo type, which is inefficient. The model summary data should instead be stored on the backend in a table, string, or JSON object (whatever makes sense); then, when you do an h2o.get_model() on the SE, the model_summary would be populated on the client side via REST (plus some client-side post-processing) and stored in the R and Python SE objects.
Relationship to AutoML: doing an h2o.get_model() on the SE (say, after training the SE in AutoML) brings the SE object into R/Python memory for the first time, which generates the model summary and (currently) requires a get_model call on every base model.
This should also fix the missing model summary in Python.
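As a rough illustration of the intended client-side post-processing, here is a minimal sketch assuming the backend embeds base-model metadata (name and algo) directly in the SE's REST payload. The field names and summary shape below are hypothetical, not the actual h2o REST schema:

```python
from collections import Counter

def summarize_base_models(se_model_json):
    """Build a model_summary-style dict from base-model metadata that the
    backend returns with the SE model JSON itself, avoiding one
    h2o.get_model() round trip per base model."""
    # Hypothetical field: list of {"name": ..., "algo": ...} entries
    base = se_model_json["base_models"]
    algo_counts = Counter(m["algo"] for m in base)
    return {
        "number_of_base_models": len(base),
        "base_model_algos": dict(algo_counts),
    }

# Example payload a single REST call might return (illustrative only)
payload = {"base_models": [
    {"name": "GBM_1", "algo": "gbm"},
    {"name": "GBM_2", "algo": "gbm"},
    {"name": "DRF_1", "algo": "drf"},
]}
print(summarize_base_models(payload))
# → {'number_of_base_models': 3, 'base_model_algos': {'gbm': 2, 'drf': 1}}
```

With this shape, the R and Python clients only parse the one response they already receive for the SE, rather than issuing a get_model call per base model.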