From an investigation of AutoML results between {{rel-yau-6}} and {{rel-yau-11}}, we noticed a performance degradation in XGBoost training.
See the discussion at https://h2oai.slack.com/archives/C0E0ADTM1/p1576683431088900?thread_ts=1576604011.073300&cid=C0E0ADTM1
Suggestions:
- Disable parallel training of CV models for XGBoost when GPU is enabled: method {{XGBoost#nModelsInParallel}}.
- Use {{updater=grow_gpu_hist}} on GPU.
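For context, the two GPU configurations at play can be sketched in native XGBoost parameter terms. This is a hypothetical illustration only: H2O's XGBoost wrapper sets these parameters internally, and the exact keys it passes to the backend are an assumption here.

```python
# Hypothetical sketch of the native XGBoost parameters involved
# (H2O sets these internally; the keys shown are standard XGBoost
# parameters, but how H2O combines them is an assumption).

# Older behavior: GPU training selected via the low-level updater,
# which leaves prediction on the default (CPU) predictor.
params_updater = {
    "updater": "grow_gpu_hist",    # GPU histogram tree growth
    "predictor": "cpu_predictor",  # scoring stays on the CPU
}

# After the refactor: tree_method="gpu_hist" also switches scoring
# to gpu_predictor implicitly, which is suspected to slow down CV
# models when several of them contend for the GPU in parallel.
params_tree_method = {
    "tree_method": "gpu_hist",
}
```

The sketch makes the suspected regression concrete: with {{updater=grow_gpu_hist}} the predictor stayed on the CPU, while the {{tree_method}}-based refactor implicitly pulled prediction onto the GPU as well.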
Jan Sterba commented: Fixed. The refactor to using tree_method for GPU had the side effect of using gpu_predictor, which probably made CV models slower.