
Evaluating custom candidates #2302

Closed

git627 opened this issue Mar 25, 2024 · 2 comments

git627 commented Mar 25, 2024

After creating my generation strategy and experiment, I'm able to get the next recommended parameter values via `get_next_trial`. I've been trying to understand how candidates are generated under the hood, and from my reading it appears that Ax uses `optimize_acqf` from BoTorch (correct me if I'm wrong). My question: suppose I have an experimental design with predetermined candidates, and I want to evaluate only those candidates for potential selection. In BoTorch this is as simple as building an acquisition function over a trained model (or models) and then evaluating it over all the candidates, as in the sketch below. However, I'm having trouble finding a way to do this in the Ax framework. Is this currently supported? Thanks for your help.
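For concreteness, here is roughly the BoTorch workflow I mean (a minimal sketch with toy data; the `SingleTaskGP` model and `ExpectedImprovement` acquisition are just examples, not my actual setup):

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import ExpectedImprovement
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy training data standing in for real observations.
train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True).sin()

# Fit a GP surrogate.
model = SingleTaskGP(train_X, train_Y)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_mll(mll)

# Build an acquisition function over the trained model.
acqf = ExpectedImprovement(model=model, best_f=train_Y.max())

# Predetermined candidates; reshape to (num_candidates, q=1, d) so the
# acquisition function scores each candidate independently.
candidates = torch.rand(20, 2, dtype=torch.double)
with torch.no_grad():
    acq_values = acqf(candidates.unsqueeze(1))  # shape: (20,)

best_candidate = candidates[acq_values.argmax()]
```

Evaluating over a fixed candidate set like this, instead of searching the whole space with `optimize_acqf`, is what I'd like to replicate in Ax.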

git627 commented Mar 25, 2024

After seeing another issue posted here, I realized that `TorchModelBridge` has an `evaluate_acquisition_function` method, so my question is answered now. Thanks again.
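For anyone who finds this later, here is a minimal sketch of the usage as I understand it. It assumes `model_bridge` is an already-fit `TorchModelBridge` (e.g. the current model of a `GenerationStrategy` after some completed trials, or one built with `Models.BOTORCH_MODULAR`); the parameter names and candidate values are placeholders:

```python
from ax.core.observation import ObservationFeatures

# Predetermined candidates, expressed as parameterizations keyed by the
# experiment's parameter names (placeholders here).
candidates = [
    {"x1": 0.10, "x2": 0.70},
    {"x1": 0.45, "x2": 0.20},
    {"x1": 0.80, "x2": 0.95},
]
obs_features = [ObservationFeatures(parameters=p) for p in candidates]

# Score every predetermined candidate under the model's acquisition function.
acq_values = model_bridge.evaluate_acquisition_function(
    observation_features=obs_features,
)

# Select the candidate with the highest acquisition value.
best_idx = max(range(len(acq_values)), key=acq_values.__getitem__)
print(candidates[best_idx], acq_values[best_idx])
```

This sidesteps `optimize_acqf` entirely: the model bridge applies its transforms and just evaluates the acquisition function at the given points.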

git627 closed this as completed Mar 25, 2024