
model_history -> save_best_model_per_estimator #283

Merged · 21 commits · Nov 18, 2021

Conversation

@sonichi (Collaborator) commented Nov 15, 2021

  1. if save_best_model_per_estimator is False and retrain_final is True, unfit the model after evaluation in HPO.
  2. retrain if using ray.
  3. update ITER_HP in config after a trial is finished.
  4. change prophet logging level.
  5. example and notebook update.
  6. allow settings to be passed to the AutoML constructor. #192 ("Are you planning to add multi-output-regression capability to FLAML?") and #277 ("Is multi-tasking allowed?") can pass the automl settings to the constructor instead of requiring a derived class.
  7. remove model_history.
  8. checkpoint bug fix.
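Item 6 can be illustrated with the defaulting pattern that appears later in the diff (`settings.get` with fallbacks). The sketch below is illustrative only, not FLAML's actual `AutoML` class; the class name and `fit` merge behavior are assumptions:

```python
# Minimal sketch of the constructor-settings pattern from item 6.
# Hypothetical class; mirrors the settings.get defaulting seen in the diff.
class AutoMLSketch:
    def __init__(self, **settings):
        # fill in defaults only for keys the caller did not supply
        settings["time_budget"] = settings.get("time_budget", 60)
        settings["task"] = settings.get("task", "regression")
        self._settings = settings

    def fit(self, X, y, **fit_kwargs):
        # per-call overrides take precedence over constructor settings
        merged = {**self._settings, **fit_kwargs}
        return merged

automl = AutoMLSketch(task="classification")
merged = automl.fit(X=None, y=None, time_budget=10)
```

With this pattern, callers no longer need a derived class just to bake in their preferred settings.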

@liususan091219 (Collaborator) left a comment:

Users need to specify save_best_model_per_estimator in every test; redundant?

@sonichi (Collaborator, Author) commented Nov 15, 2021

> Users need to specify save_best_model_per_estimator in every test; redundant?

Not every test. Only if you need to save the best model per estimator.
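A hypothetical sketch of what the flag toggles may make the exchange concrete: when True, the best trained model is kept per estimator; when False, only the single overall best is tracked. All names below are illustrative, not FLAML's internals:

```python
# Hypothetical sketch of the save_best_model_per_estimator behavior.
class SearchState:
    def __init__(self, save_best_model_per_estimator=False):
        self.save_best_model_per_estimator = save_best_model_per_estimator
        self.best_model_per_estimator = {}  # estimator -> (loss, model)
        self.best = None                    # (loss, estimator, model)

    def record(self, estimator, model, loss):
        # always track the single overall best model
        if self.best is None or loss < self.best[0]:
            self.best = (loss, estimator, model)
        # optionally also track the best model for each estimator
        if self.save_best_model_per_estimator:
            prev = self.best_model_per_estimator.get(estimator)
            if prev is None or loss < prev[0]:
                self.best_model_per_estimator[estimator] = (loss, model)

state = SearchState(save_best_model_per_estimator=True)
state.record("lgbm", "lgbm_model_v1", 0.30)
state.record("xgboost", "xgb_model_v1", 0.25)
state.record("lgbm", "lgbm_model_v2", 0.20)
```

With the flag off (the default here), per-estimator models are simply not retained, which is why only tests that need them must set it.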

the states can best_trial.get_best_checkpoint return a non-empty value.
"""
import torch
def evaluate(self, eval_dataset=None, ignore_keys=None, metric_key_prefix="eval"):
Review comment (Collaborator):
add typing?
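One way the reviewer's suggestion could be addressed is to annotate the signature shown above. The dataset and return types here are assumptions for illustration; FLAML's actual annotations may differ:

```python
# Hypothetical typed version of the evaluate signature above.
from typing import Any, Dict, List, Optional

def evaluate(
    self: Any,
    eval_dataset: Optional[Any] = None,
    ignore_keys: Optional[List[str]] = None,
    metric_key_prefix: str = "eval",
) -> Dict[str, float]:
    ...
```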

flaml/automl.py Outdated
return self._model_history
self._settings = settings
settings["time_budget"] = settings.get("time_budget", 60)
settings["task"] = settings.get("task", "regression")
Review comment (Collaborator):

The default task is regression? Used to be classification.
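Note that the `settings.get` pattern in the diff only fills in `"regression"` when the caller did not pass a task; an explicit task is preserved. A minimal check:

```python
# settings.get only supplies the default when the key is absent.
settings = {"task": "classification"}
settings["task"] = settings.get("task", "regression")
explicit_task = settings["task"]   # user value kept

settings = {}
settings["task"] = settings.get("task", "regression")
default_task = settings["task"]    # default applied
```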

flaml/automl.py Outdated
ensemble=None,
eval_method=None,
log_type=None,
save_best_model_per_estimator=None,
Review comment (Collaborator):

Is it necessary to change 'model_history' to 'save_best_model_per_estimator'? It won't be compatible with code using older versions.
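The compatibility concern could be handled with a deprecation shim that accepts the old keyword, warns, and maps it to the new name. This is a sketch only, not FLAML's actual handling; the helper name is hypothetical:

```python
# Hypothetical shim: accept the old model_history kwarg during a
# deprecation period and map it onto save_best_model_per_estimator.
import warnings

def resolve_flag(save_best_model_per_estimator=None, model_history=None):
    if model_history is not None:
        warnings.warn(
            "model_history is deprecated; use save_best_model_per_estimator",
            DeprecationWarning,
        )
        # the new keyword, if given explicitly, wins over the old one
        if save_best_model_per_estimator is None:
            save_best_model_per_estimator = model_history
    return bool(save_best_model_per_estimator)
```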

flaml/automl.py Outdated
self._n_concurrent_trials = n_concurrent_trials
self._early_stop = early_stop
self._use_ray = use_ray or n_concurrent_trials > 1
self._state.save_best_model_per_estimator = save_best_model_per_estimator
# use the following condition if we have an estimation of average_trial_time and average_trial_overhead
Review comment (Collaborator):

Move these two commented lines to line #1861
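The diff above forces ray on whenever more than one concurrent trial is requested (`use_ray or n_concurrent_trials > 1`). A minimal sketch of that coercion, with a hypothetical helper name:

```python
# Sketch of the use_ray coercion from the diff: concurrent trials
# require ray, so the flag is forced on when n_concurrent_trials > 1.
def resolve_use_ray(use_ray, n_concurrent_trials):
    return bool(use_ray) or n_concurrent_trials > 1
```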

@sonichi sonichi merged commit 72caa21 into main Nov 18, 2021
@sonichi sonichi deleted the history branch November 18, 2021 17:39