It searches only a small parameter area when using pruning. #270

Closed
higumachan opened this issue Dec 12, 2018 · 4 comments

Comments

@higumachan
Contributor

I encountered a problem where many trials suggest similar parameters when I am using pruning.
I cannot share the original code, but I made a toy script below that reproduces a similar case.

import optuna


count = 0

def objective(trial: optuna.Trial):
    global count
    a = trial.suggest_uniform('a', 0, 1)
    b = trial.suggest_uniform('b', 0, 1)

    print(a, b)

    # Let the first 11 trials complete normally, then prune every
    # subsequent trial.
    if count > 10:
        raise optuna.structs.TrialPruned('pruned!')

    count += 1
    return 1

if __name__ == '__main__':
    study = optuna.create_study()
    study.optimize(objective, 100)

Running it produces the following output (sorry for the long log).

0.9193297609499109 0.2951020724586255
[I 2018-12-12 22:41:26,449] Finished a trial resulted in value: 1.0. Current best value is 1.0 with parameters: {'a': 0.9193297609499109, 'b': 0.2951020724586255}.
0.07133285126371469 0.7913286482928995
[I 2018-12-12 22:41:26,660] Finished a trial resulted in value: 1.0. Current best value is 1.0 with parameters: {'a': 0.9193297609499109, 'b': 0.2951020724586255}.
0.18891975710874032 0.5979379868952668
[I 2018-12-12 22:41:26,870] Finished a trial resulted in value: 1.0. Current best value is 1.0 with parameters: {'a': 0.9193297609499109, 'b': 0.2951020724586255}.
0.4037803166887389 0.5556959131058304
[I 2018-12-12 22:41:27,081] Finished a trial resulted in value: 1.0. Current best value is 1.0 with parameters: {'a': 0.9193297609499109, 'b': 0.2951020724586255}.
0.5486918451302252 0.008951124018864665
[I 2018-12-12 22:41:27,290] Finished a trial resulted in value: 1.0. Current best value is 1.0 with parameters: {'a': 0.9193297609499109, 'b': 0.2951020724586255}.
0.22330875913552062 0.14336088783151546
[I 2018-12-12 22:41:27,497] Finished a trial resulted in value: 1.0. Current best value is 1.0 with parameters: {'a': 0.9193297609499109, 'b': 0.2951020724586255}.
0.2873568183345594 0.4439617700115277
[I 2018-12-12 22:41:27,708] Finished a trial resulted in value: 1.0. Current best value is 1.0 with parameters: {'a': 0.9193297609499109, 'b': 0.2951020724586255}.
0.5538432424257924 0.2169753830498714
[I 2018-12-12 22:41:27,919] Finished a trial resulted in value: 1.0. Current best value is 1.0 with parameters: {'a': 0.9193297609499109, 'b': 0.2951020724586255}.
0.5127323722154805 0.5340531117541805
[I 2018-12-12 22:41:28,129] Finished a trial resulted in value: 1.0. Current best value is 1.0 with parameters: {'a': 0.9193297609499109, 'b': 0.2951020724586255}.
0.7841730503441033 0.2835078605459713
[I 2018-12-12 22:41:28,336] Finished a trial resulted in value: 1.0. Current best value is 1.0 with parameters: {'a': 0.9193297609499109, 'b': 0.2951020724586255}.
0.9546588242819174 0.16202682451250522
[I 2018-12-12 22:41:28,543] Finished a trial resulted in value: 1.0. Current best value is 1.0 with parameters: {'a': 0.9193297609499109, 'b': 0.2951020724586255}.
0.9983748946513037 0.6555629703151846
[I 2018-12-12 22:41:28,754] Setting trial status as TrialState.PRUNED. pruned!
0.9927668704654466 0.7035276767714436
[I 2018-12-12 22:41:28,966] Setting trial status as TrialState.PRUNED. pruned!
0.9987580086217962 0.6628887019713245
[I 2018-12-12 22:41:29,180] Setting trial status as TrialState.PRUNED. pruned!
0.9284772067129109 0.6705110328580711
[I 2018-12-12 22:41:29,390] Setting trial status as TrialState.PRUNED. pruned!
0.9848210255379823 0.6671137737909589
[I 2018-12-12 22:41:29,603] Setting trial status as TrialState.PRUNED. pruned!
0.9953697388365024 0.6755573530081922
[I 2018-12-12 22:41:29,810] Setting trial status as TrialState.PRUNED. pruned!
0.9991738276818234 0.7173958962072531
[I 2018-12-12 22:41:30,023] Setting trial status as TrialState.PRUNED. pruned!
0.8991800344271141 0.6927683451172044
[I 2018-12-12 22:41:30,232] Setting trial status as TrialState.PRUNED. pruned!
0.9934512890815811 0.7088344944318075
[I 2018-12-12 22:41:30,444] Setting trial status as TrialState.PRUNED. pruned!
0.9903050248879492 0.689579918125413
[I 2018-12-12 22:41:30,656] Setting trial status as TrialState.PRUNED. pruned!
0.9163965495782573 0.6780475754902275
[I 2018-12-12 22:41:30,864] Setting trial status as TrialState.PRUNED. pruned!
0.9617450047338343 0.6806464718971055
[I 2018-12-12 22:41:31,077] Setting trial status as TrialState.PRUNED. pruned!
0.9832071113177652 0.7012388287383466
[I 2018-12-12 22:41:31,291] Setting trial status as TrialState.PRUNED. pruned!
0.9323896427950873 0.70081701699815
[I 2018-12-12 22:41:31,502] Setting trial status as TrialState.PRUNED. pruned!
0.9476696183631225 0.6904819670780827
[I 2018-12-12 22:41:31,714] Setting trial status as TrialState.PRUNED. pruned!
0.9283879624992482 0.7026143097522725
[I 2018-12-12 22:41:31,925] Setting trial status as TrialState.PRUNED. pruned!
0.9564054553207888 0.7236265488557467
[I 2018-12-12 22:41:32,138] Setting trial status as TrialState.PRUNED. pruned!
0.975684885384283 0.6761515105075317
[I 2018-12-12 22:41:32,350] Setting trial status as TrialState.PRUNED. pruned!
0.9745888444263756 0.709527987409857
[I 2018-12-12 22:41:32,563] Setting trial status as TrialState.PRUNED. pruned!

Once pruning starts, the later trials seem to keep repeating similar parameters.

Does anyone have insight into this problem?
Or is this the expected behavior?

@higumachan
Contributor Author

higumachan commented Dec 12, 2018

For now, I am patching optuna/storages/base.py with the following code:

    def get_trial_param_result_pairs(self, study_id, param_name):
        # type: (int, str) -> List[Tuple[float, float]]

        # Be careful: this method returns param values in internal representation
        all_trials = self.get_all_trials(study_id)

        return [
            (t.params_in_internal_repr[param_name], t.value)
            for t in all_trials
            if (t.value is not None and
                param_name in t.params and
                t.state in [structs.TrialState.COMPLETE, structs.TrialState.PRUNED])
            # TODO(Akiba): We also want to use pruned results
        ]

It works as I intended, but I am not sure whether this is a good approach.
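
For anyone who wants to try the same change without editing the installed package, here is a rough sketch of applying it as a monkey patch. It assumes the BaseStorage class and method name from optuna/storages/base.py at the time, and is not part of the original patch:

from optuna import structs
from optuna.storages.base import BaseStorage


def get_trial_param_result_pairs(self, study_id, param_name):
    # type: (int, str) -> List[Tuple[float, float]]

    # Same body as the patch above: include PRUNED trials in addition to
    # COMPLETE ones, so the sampler also sees parameters of trials that
    # were cut short.
    all_trials = self.get_all_trials(study_id)
    return [
        (t.params_in_internal_repr[param_name], t.value)
        for t in all_trials
        if (t.value is not None and
            param_name in t.params and
            t.state in [structs.TrialState.COMPLETE, structs.TrialState.PRUNED])
    ]


BaseStorage.get_trial_param_result_pairs = get_trial_param_result_pairs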

@higumachan higumachan changed the title from "It searches only a small parameter area when pruning is started." to "It searches only a small parameter area when using pruning." on Dec 12, 2018
@sile
Member

sile commented Dec 14, 2018

Thank you for reporting this issue.

This is a known problem within our team. The combination of TPE and pruning sometimes works poorly, because the distribution update gets trapped by the pruning mechanism. This might be improved a bit by today's release (if you are interested in the details, please see #268 and #261).

Your patch would be one candidate solution, but it may cause other problems; for example, the parameters suggested to pruned trials would be underestimated by TPE. The best solution is still under consideration, and we will address it in the future.

As a workaround, you can also try RandomSampler if your task suffers from this problem.
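
A minimal sketch of that workaround (this assumes the same 2018-era suggest_uniform API as the toy script above; the objective body is only a placeholder):

import optuna


def objective(trial):
    a = trial.suggest_uniform('a', 0, 1)
    b = trial.suggest_uniform('b', 0, 1)
    # ... train here and possibly raise optuna.structs.TrialPruned ...
    return 1.0


if __name__ == '__main__':
    # RandomSampler draws each parameter independently of past trials,
    # so pruned trials cannot bias where new candidates are suggested.
    study = optuna.create_study(sampler=optuna.samplers.RandomSampler())
    study.optimize(objective, n_trials=100)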

@higumachan
Contributor Author

@sile

Thank you for the response.

I checked again with the new version, and this issue is improved.

Thank you.

@sile
Member

sile commented Jul 19, 2019

@higumachan FYI: this problem was completely solved by #439 (the next release will include this patch).
