
Results object returned by task.fit() no longer contains info about each config tried and their reward #32

Closed
jwmueller opened this issue Oct 14, 2019 · 2 comments
Labels: API & Doc, enhancement

Comments


jwmueller commented Oct 14, 2019

After I call `results = task.fit(num_trials=10)`, the `results` object should contain information about each of the 10 trials: which hyperparameter config was tried, the corresponding reward, and various other metadata about the trial.

I would use the following naming conventions for the object returned by `results = task.fit()`:

results.model = best MXNet model found in task.fit()  (same as current)

results.time = run-time of fit  (same as current)

results.validation_performance = validation performance achieved by this model (what is currently called 'reward')

results.train_objective = training loss value achieved by this model on the training data

results.selected_hyperparameters = hyperparameter values corresponding to this model (what is currently called 'config')

results.metadata = dict containing the following keys:
    search_space = hyperparameter search space considered in task.fit() (currently unnamed)
    search_strategy = HPO algorithm used (e.g. Hyperband, random, BayesOpt). If the HPO algorithm used kwargs, then this should be a tuple (HPO_algorithm_string, HPO_kwargs)
    num_completed_trials = number of trials completed during task.fit()

    results.metadata can contain other optional keys such as:
    latency = inference time of this model (time for a feedforward pass)
    memory = amount of memory required by this model 
    worst_errors = list of the K validation examples on which the model made the worst errors (i.e. lowest probability assigned to the correct class, in the case of classification)


results.trial_info = list of dicts (length = number of trials).
    results.trial_info[i] is a dict containing the following keys:
        config = hyperparameter configuration tried in the ith trial
        validation_performance = validation performance of the corresponding model in the ith trial
        train_objective = training loss value achieved by the ith trial's model on the training data
        metadata = dict of various optional metadata with keys such as:
            early_stopped = whether or not this trial was early stopped in Hyperband
            latency = inference time of the model from this trial
            memory = amount of memory required by the model from this trial
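
For concreteness, here is a rough sketch of what the proposed object could look like. The attribute and key names follow the proposal above; the use of `SimpleNamespace` and all concrete values are purely illustrative placeholders, not a suggested implementation:

```python
# Hypothetical sketch of the proposed `results` layout (not AutoGluon's actual API).
from types import SimpleNamespace

results = SimpleNamespace(
    model=None,                                  # best MXNet model found by task.fit() (same as current)
    time=142.7,                                  # run-time of fit, in seconds (same as current)
    validation_performance=0.93,                 # what is currently called 'reward'
    train_objective=0.21,                        # training loss achieved by the best model
    selected_hyperparameters={"lr": 1e-3, "batch_size": 64},  # what is currently called 'config'
    metadata={
        "search_space": {"lr": (1e-5, 1e-2), "batch_size": [32, 64, 128]},
        "search_strategy": ("hyperband", {"max_t": 100}),      # (HPO_algorithm_string, HPO_kwargs)
        "num_completed_trials": 10,
        # optional keys: "latency", "memory", "worst_errors", ...
    },
    trial_info=[
        {
            "config": {"lr": 3e-4, "batch_size": 128},
            "validation_performance": 0.89,
            "train_objective": 0.35,
            "metadata": {"early_stopped": True, "latency": 0.004, "memory": 1.2e8},
        },
        # ... one dict per completed trial
    ],
)

# With this layout, a user can recover e.g. the best trial directly:
best_trial = max(results.trial_info, key=lambda t: t["validation_performance"])
print(best_trial["config"], best_trial["validation_performance"])
```

Keeping each per-trial dict flat like this would also make it straightforward to dump `results.trial_info` into a DataFrame for analysis or plotting.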
@cgraywang commented

Originally it was in the metadata.

zhanghang1989 added the API & Doc and enhancement labels on Oct 16, 2019
@zhanghang1989 commented

We can move further discussion to #29.
