Currently, when a model fails during validation (e.g. due to an unsupported parameter), the error is simply logged and the model is skipped entirely.
This raises several concerns:

* If AutoML passes a global parameter that is incompatible with the training dataset (e.g. a wrong default `distribution` for a regression task), then all models will fail one after another, but training will keep going indefinitely, wasting time and resources.
* If the user passes a parameter that is incompatible with only one specific algo, then AutoML could be more clever and apply some fallback logic (see the sketch after this list):
  * try to train the model
  * handle validation errors
  * if the validation errors are parameter errors, reset those parameters to their defaults
  * rerun the model
  * if validation fails again, skip the model
Sebastien Poirier commented: Note: the fallback logic (retry after switching params back to defaults after a validation error) hasn't been implemented in this ticket. Need to reconsider whether it's a desired behaviour.