After I call results = task.fit(num_trials=10), the 'results' object should contain information about each of the 10 trials:
which hyperparameter config was tried, the corresponding reward, and other metadata about the trial.
I would use the following naming conventions for the object returned by results = task.fit():
results.model = best MXNet model found in task.fit() (same as current)
results.time = run-time of fit (same as current)
results.validation_performance = validation performance achieved by this model (what is currently called 'reward')
results.train_objective = training loss value achieved by this model on the training data
results.selected_hyperparameters = hyperparameter values corresponding to this model (what is currently called 'config')
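The core attributes above could be sketched as follows. This is a hypothetical illustration of the proposed interface, not actual AutoGluon code; all values are placeholders.

```python
from types import SimpleNamespace

# Hypothetical sketch of the proposed return value of task.fit();
# the model object and all numeric values here are placeholders.
results = SimpleNamespace(
    model="<best MXNet model>",       # best model found during the search
    time=342.7,                       # total run-time of fit(), in seconds
    validation_performance=0.913,     # formerly called 'reward'
    train_objective=0.187,            # training loss of the best model
    selected_hyperparameters={        # formerly called 'config'
        "learning_rate": 0.01,
        "num_layers": 3,
    },
)

print(results.validation_performance)
```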
results.metadata = dict containing the following keys:
search_space = hyperparameter search space considered in task.fit() (currently unnamed)
search_strategy = HPO algorithm used (i.e., Hyperband, random, BayesOpt). If the HPO algorithm used kwargs, then this should be a tuple (HPO_algorithm_string, HPO_kwargs)
num_completed_trials = number of trials completed during task.fit()
results.metadata can contain other optional keys such as:
latency = inference-time of this model (time for feedforward pass)
memory = amount of memory required by this model
worst_errors = list of K validation examples where the model made its worst errors (i.e., lowest probability assigned to the correct class, in the case of classification)
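Putting the required and optional keys together, results.metadata might look like the dict below. This is a hypothetical sketch of the proposal; every value is an illustrative placeholder.

```python
# Hypothetical contents of results.metadata under this proposal;
# all values are illustrative placeholders.
metadata = {
    # required keys
    "search_space": {"learning_rate": (1e-4, 1e-1), "num_layers": [2, 3, 4]},
    "search_strategy": ("Hyperband", {"max_t": 27, "reduction_factor": 3}),
    "num_completed_trials": 10,
    # optional keys
    "latency": 0.004,                 # seconds per feedforward pass
    "memory": 51_200_000,             # bytes required by the model
    "worst_errors": [17, 503, 941],   # indices of worst-predicted validation examples
}

print(metadata["num_completed_trials"])
```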
results.trial_info = list of dicts (length = number of trials).
results.trial_info[i] is dict containing the following keys:
config = hyperparameter configuration tried in the ith trial
validation_performance = validation performance of the corresponding model in the ith trial
train_objective = training loss value achieved by the ith trial's model on the training data
metadata = dict of various optional metadata with keys such as:
early_stopped = whether or not this trial was stopped early by Hyperband
latency = inference-time of the model from this trial
memory = amount of memory required by model from this trial
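The per-trial structure above could be sketched like this. Again a hypothetical illustration of the proposal with placeholder values, showing two of the ten trials; the usage example shows how a user could recover the best trial from trial_info alone.

```python
# Hypothetical sketch of results.trial_info for a run with num_trials=10;
# only two trials are shown, and all values are placeholders.
trial_info = [
    {
        "config": {"learning_rate": 0.01, "num_layers": 3},
        "validation_performance": 0.913,
        "train_objective": 0.187,
        "metadata": {"early_stopped": False, "latency": 0.004, "memory": 51_200_000},
    },
    {
        "config": {"learning_rate": 0.1, "num_layers": 2},
        "validation_performance": 0.774,
        "train_objective": 0.412,
        "metadata": {"early_stopped": True},  # trial stopped early by Hyperband
    },
]

# Example use: recover the best trial from trial_info alone.
best = max(trial_info, key=lambda t: t["validation_performance"])
print(best["config"])
```

Keeping per-trial records in a plain list of dicts like this makes it easy to post-process (e.g. sort trials, plot validation performance over time) without touching the fitted models.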