How to set a seed for the acquisition function #1663
Comments
Is the function you're evaluating deterministic, or is there noise? If the latter, you shouldn't expect deterministic samples. @saitcakmak, @esantorella, what's the latest on making the BO generation deterministic (assuming a deterministic function)? I know we expose this, but there have been some sources of nondeterminism in the past.
Fixing the PyTorch seed should get you deterministic samples. In the benchmarking suite, we use the `manual_seed` context manager for this. Note that this will not guarantee determinism across hardware, due to some non-determinism in low-level PyTorch APIs (e.g., different implementations of a given algorithm may be used depending on CPU type).
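For illustration, here is a minimal sketch of what such a seeding context manager does, written against Python's stdlib `random` module rather than torch (the actual BoTorch helper is `botorch.utils.sampling.manual_seed`; the exact torch behavior is an assumption here): it saves the RNG state, seeds, and restores the saved state on exit, so repeated runs under the same seed draw identical samples without disturbing the surrounding RNG stream.

```python
import random
from contextlib import contextmanager


@contextmanager
def manual_seed(seed=None):
    # Sketch of a manual_seed-style context manager, using Python's
    # `random` module instead of torch: save the RNG state, apply the
    # seed, and restore the old state when the block exits.
    old_state = random.getstate()
    try:
        if seed is not None:
            random.seed(seed)
        yield
    finally:
        if seed is not None:
            random.setstate(old_state)


with manual_seed(seed=0):
    a = [random.random() for _ in range(3)]
with manual_seed(seed=0):
    b = [random.random() for _ in range(3)]
# identical draws under the same seed
assert a == b
```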
@Balandat, I am using a non-deterministic function (Rosenbrock).

@saitcakmak: worked like a charm. Instead of using a scheduler like in the benchmarking test, I just put the context manager above my loops and everything worked. Amazing :) And thanks for pointing out the potential inconsistency across hardware.

```python
with manual_seed(seed=self.seed):
    sobol = Models.SOBOL(search_space=experiment.search_space)
    for i in range(n_init_evals):
        generator_run = sobol.gen(n=1)
        ...
    while i < self.n_max_evals:
        model = Model(
            experiment=self.ax_experiment,
            data=self.ax_experiment.fetch_data(),
        )
        generator_run = model.gen(n=1)
        ...
```
Yes, and even on the same machine with the seed set, not all of the PyTorch operations used by Ax are completely deterministic. The nondeterminism is typically minor -- only a change in the last digit of a float -- but in the context of a BO loop on a high-dimensional problem, the differences can compound substantially over repeated iterations. This happens because a very slight change in acquisition values can lead to a larger change in which arm appears to be the "best," which leads to different input data for the next iteration. Repeat that five or ten times, and the differences can become quite large. I've looked into this, but didn't have time to fully pin down the sources of nondeterminism. It might be possible to avoid it with `torch.use_deterministic_algorithms`, but I'm not sure that would be desirable, since the deterministic algorithms can be slower.
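The compounding effect described above can be illustrated with a toy loop (plain Python, not Ax code, and the "acquisition function" here is a made-up stand-in): two runs whose candidate scores differ only by about `1e-15` at an exact tie pick different candidates, and because each choice determines the next iteration's data, the trajectories diverge from the very first step.

```python
def run_loop(eps, n_iters=5):
    # Toy stand-in for a BO loop: pick the candidate with the highest
    # "acquisition value"; `eps` models float-level nondeterminism.
    x = 1.0
    history = []
    for _ in range(n_iters):
        # Two candidates whose scores are exactly tied when eps == 0,
        # since (-a)**2 == a**2 in IEEE floating point.
        cand = [x + 0.1, -(x + 0.1)]
        acq = [cand[0] ** 2, cand[1] ** 2 + eps]
        best = cand[acq.index(max(acq))]
        history.append(best)
        x = best  # next iteration's data depends on this choice
    return history


same = run_loop(0.0)        # monotonically increasing trajectory
perturbed = run_loop(1e-15)  # oscillating trajectory
# a last-digit-sized perturbation produces entirely different runs
assert same != perturbed
```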
Interesting, thanks for clarifying. For me, approximately deterministic behavior over a few iterations is fine for testing my application, but it's good to keep this behavior in mind.
@flo-schu, it seems to me that your question was answered, so I'm closing this issue. Please feel free to follow up on it, but if you do, please reopen the issue as we might not see your comment on a closed one. |
I would like to write a reproducible test of BO in Ax using a generation strategy with `Sobol` followed by `GPEI`. I can easily set a seed for the Sobol model, but I could not manage to do so for GPEI, which results in a sequence of non-deterministic parameter samples. Is it possible to provide a model with a seed for consecutive iterations over the Markov chains?
I'd be happy to get some hints about where to look.