
How to generate the same challengers in every run? #467

Closed
qqliang opened this issue Dec 11, 2018 · 16 comments

Comments

@qqliang

qqliang commented Dec 11, 2018

As the title says, I want to get the same result across different runs. How can I do that? Can SMAC generate the same challengers in every run, but different ones in each iteration?

@mlindauer
Contributor

Dear qqliang,

Thank you for your interest in SMAC.
Sorry, I don't understand what you want to do. Maybe you could explain your use-case a bit more?

Cheers,
Marius

@qqliang
Author

qqliang commented Dec 11, 2018

SMAC uses random search and/or local search to generate challengers. How can I generate different challengers in each iteration (one iteration includes fitting the model, selecting configurations, and intensification), but the same ones in each run (one run being one execution of SMAC on a dataset)? I want to get the same result in every run.

@mlindauer
Contributor

So, you want SMAC to be deterministic? (I wonder why?)
Unfortunately, it's not so easy to have a perfectly deterministic SMAC. First of all, the function you optimize would need to be deterministic. If the function is stochastic (e.g., training a DNN), there is no way SMAC will be deterministic.
If your function is deterministic, you could try the fmin interface of SMAC. It's not guaranteed that SMAC will always be deterministic, but it is as good as it can be (at the moment).
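A minimal sketch of what that could look like (assuming SMAC3's function-minimization facade `fmin_smac` from `smac.facade.func_facade`; exact module paths and signatures may differ between versions):

```python
import numpy as np

from smac.facade.func_facade import fmin_smac  # assumed module path; may differ by version


def quadratic(x):
    # Deterministic toy objective; SMAC can only be (near-)reproducible
    # if the optimized function itself is deterministic.
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2


x_opt, cost, smac_obj = fmin_smac(
    func=quadratic,
    x0=[0.0, 0.0],                      # starting point
    bounds=[(-5.0, 5.0), (-5.0, 5.0)],  # one (lower, upper) pair per dimension
    maxfun=50,                          # budget of function evaluations
    rng=np.random.RandomState(42),      # fixed seed for (near-)reproducible runs
)
print(x_opt, cost)
```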

Cheers,
Marius

@qqliang
Author

qqliang commented Dec 11, 2018

The reason is that I run SMAC with the same parameters on a dataset many times, but I get quite different results. So I wonder how to control it to get the same result.

@mlindauer
Contributor

There is a good reason why SMAC is non-deterministic. It simply increases the chance that you achieve better performance in at least one of your SMAC runs.
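To illustrate that point, a common pattern is to launch several independent SMAC runs with different seeds and keep the best incumbent (a sketch, reusing the hypothetical `fmin_smac` call from above):

```python
import numpy as np

from smac.facade.func_facade import fmin_smac  # assumed module path, as above


def quadratic(x):
    return (x[0] - 2.0) ** 2


# Several independent runs with different seeds; the randomness means at least
# one of them has a good chance of finding a better configuration.
results = []
for seed in range(5):
    x_opt, cost, _ = fmin_smac(func=quadratic,
                               x0=[0.0],
                               bounds=[(-5.0, 5.0)],
                               maxfun=25,
                               rng=np.random.RandomState(seed))
    results.append((cost, x_opt))

best_cost, best_x = min(results, key=lambda r: r[0])  # keep the best run
print(best_cost, best_x)
```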

@qqliang
Author

qqliang commented Dec 11, 2018

OK, I got it. Thank you very much for your reply.

@qqliang
Author

qqliang commented Dec 11, 2018

Another question: I want to know the probability of the uncertainty. Is it predictable?

@mlindauer
Contributor

Uncertainty of what?

@qqliang
Author

qqliang commented Dec 11, 2018

The chance of achieving better performance than with traditional methods.

@mlindauer
Contributor

What do you consider traditional methods?

@qqliang
Author

qqliang commented Dec 11, 2018

Random search.

@mlindauer
Contributor

You want to know the probability (or the uncertainty) of performing better than random search? I would guess that SMAC could estimate that in each iteration, relative to the expectation of what random search could achieve, but that's not implemented.

@qqliang
Author

qqliang commented Dec 12, 2018

Yes, that's what I want to know. But I don't know how to estimate it.

@qqliang
Author

qqliang commented Dec 13, 2018

Dear mlindauer, I found an argument 'deterministic' in Scenario, whose documentation says: "If true, the optimization process will be repeatable."
What is the effect of setting this parameter to true? Will I get the same challengers in each iteration, or will the same configurations be run?
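For reference, the flag is passed via the scenario dictionary, roughly like this (a minimal sketch assuming SMAC3's `Scenario` class and ConfigSpace; option names and module paths may differ between versions):

```python
from ConfigSpace import ConfigurationSpace
from ConfigSpace.hyperparameters import UniformFloatHyperparameter

from smac.scenario.scenario import Scenario  # assumed module path

cs = ConfigurationSpace()
cs.add_hyperparameter(UniformFloatHyperparameter("x", -5.0, 5.0))

scenario = Scenario({
    "run_obj": "quality",     # optimize solution quality rather than runtime
    "runcount-limit": 50,     # budget of target-function evaluations
    "cs": cs,                 # the configuration space
    "deterministic": "true",  # the flag in question; as the reply below notes,
                              # its documented description is a documentation bug
})
```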

@mlindauer
Contributor

Thanks for pointing that out. Unfortunately, that's a bug in the documentation. I created a separate issue so that we can fix it in the next release (#468).

@qqliang
Author

qqliang commented Dec 13, 2018

I have another question about the intensification mechanism. When the challenger's performance is equal to the incumbent's and the challenger has at least as many runs as the incumbent, the challenger is trusted to be at least as good as the incumbent. Why are so many runs of the same configuration needed? It takes up the time budget of intensification; why not just pick a configuration that performs better than the incumbent?
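For concreteness, the acceptance condition paraphrased above could be written like this (an illustrative sketch only, not SMAC's actual implementation):

```python
def challenger_becomes_incumbent(challenger_cost, incumbent_cost,
                                 n_challenger_runs, n_incumbent_runs):
    """Paraphrase of the rule described above: the challenger only replaces the
    incumbent once it has been evaluated on at least as many runs and its
    aggregated cost is not worse than the incumbent's."""
    return (n_challenger_runs >= n_incumbent_runs
            and challenger_cost <= incumbent_cost)
```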
