Using Ax as a supplier of candidates for black box evaluation #120
Hello, @avimit! May I ask, just as a clarification, why the Service API does not work for you? That is the API we generally intended for the use case where trials are evaluated externally and the data is then logged back to Ax, so overall the Service API seems like the right fit. If the issue is the need to pass custom models, that is possible, by passing a […]
I hadn't tried the Service API before this week; I had assumed (wrongly?) that the Developer API was the one for me. No, the issue was never custom models. The issue was loading known and pending evaluations into a model and getting the next candidates for evaluation, and doing so externally, without specifying the evaluation function. I will have a look at this example. Thanks!
@avimit, the Service API definitely supports asynchronous evaluation with proper handling of pending points. Please let us know if this works for you, and if you have any suggestions for how we could make this functionality clearer in the docs (I can see how calling the Developer API the "Developer API" is a little confusing, since all developers might think it's the API for them ;). Re: your query about […]
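For reference, a minimal sketch of that asynchronous Service API flow (the parameter names, bounds, objective name, and the `run_remote_evaluation` call are all illustrative placeholders, not part of this issue):

```
from ax.service.ax_client import AxClient

ax_client = AxClient()
ax_client.create_experiment(
    name="remote_hpo",  # illustrative name
    parameters=[
        {"name": "x1", "type": "range", "bounds": [-5.0, 10.0]},
        {"name": "x2", "type": "range", "bounds": [0.0, 15.0]},
    ],
    objective_name="score",
    minimize=False,
)

# Ask Ax for a candidate; the trial stays "pending" until data is logged back.
parameters, trial_index = ax_client.get_next_trial()

# ... evaluate externally, e.g. train and test a model on a cloud server ...
score = run_remote_evaluation(parameters)  # hypothetical external call

# Log the result back as (mean, SEM); SEM=0.0 marks a noiseless observation.
ax_client.complete_trial(trial_index=trial_index, raw_data={"score": (score, 0.0)})
```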
@lena-kashtelyan, @eytan, thank you again for the generous responsiveness 🙏🏼 There is one obstacle which still stops me from managing a full run with the Service API: I still haven't found out how to load known (= with results) and pending evaluations into the Ax client. Another thing: I was happy to read in the documentation that when no generation strategy is set, "one is intelligently chosen based on properties of search space" –– this could be very handy. Two questions about it: […]
@avimit, hello again! Re: loading known and pending evaluations –– by evaluations here, do you mean trial evaluations? A pending trial would be one whose parameterization has been suggested by the `AxClient`, but for which evaluation data has not yet been logged back.
@lena-kashtelyan, thanks! No, I mean pre-existing results, known beforehand, from before using Ax; initiated from outside. And on top of that, as you write, a "pending trial, whose parameterization has been suggested by the AxClient, but for which evaluation data has not yet been logged back". And also: how do I report the results of AxClient-suggested evaluations back to Ax?
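Both operations are covered by the Service API; a minimal sketch, assuming the `ax_client` from the earlier sketch (parameter names and objective values below are illustrative, reusing the shapes from later in this thread):

```
# Load a pre-existing result: attach the known parameterization as a trial,
# then immediately complete it with the observed objective value.
parameters, trial_index = ax_client.attach_trial(parameters={"x1": 10.0, "x2": 4.789})
ax_client.complete_trial(trial_index=trial_index, raw_data=-5.13)

# Report a result for a trial that Ax itself suggested:
parameters, trial_index = ax_client.get_next_trial()
# ... external evaluation happens here ...
ax_client.complete_trial(trial_index=trial_index, raw_data=-17.51)
```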
@lena-kashtelyan, hi, I did manage to load results into the client; I am not sure if it's the correct way. I will describe, and then ask a few questions.
Description: […]
Questions: […]
@avimit, hello again! Re: your description –– that sounds exactly right; that is how the Service API is meant to be used.
@lena-kashtelyan, good to hear from you again,
These are the parameters with which I create the experiment: […]
There are only 3 of them (initial points), and they look like this:
```
X_init:
[[10. 4.78911683]
[-5. 15. ]
[ 9.03619777 3.18465702]]
Y_init:
[[ -5.13350902]
[-17.5083 ]
[ -2.14991914]]
```
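For illustration, a loop that loads such initial points through the Service API could look like this (a sketch assuming two range parameters named `x1` and `x2`, as in the sketch further up in the thread):

```
# X_init, Y_init: NumPy arrays of known points, shaped as printed above.
for x, y in zip(X_init, Y_init):
    # Attach each known point as a trial, then complete it with its result.
    _, trial_index = ax_client.attach_trial(
        parameters={"x1": float(x[0]), "x2": float(x[1])}
    )
    ax_client.complete_trial(trial_index=trial_index, raw_data=float(y[0]))
```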
Strange, I see an email here, but your reply is not in the issue discussion.
Thanks,
Avi
On Fri, 12 Jul 2019 at 18:56, Lena Kashtelyan wrote:
> @avimit, I will get back to you on 2), but just so that I may look a bit more into 3), may I ask what the objective values were for the 5 trials you did complete before generating the subsequent ones?
I tried to generate 10 trials, as you can see above, after supplying 3 init points.
@avimit, hello again –– My apologies, I deleted my comment after realizing I did not need the points, since I was able to reproduce the issue on my own. Thank you very much for your patience and cooperation.
@lena-kashtelyan, thank you!
@avimit, thank you for your feedback and your patience! I will keep you posted as the changes are merged into master.
@avimit, the fix for the Service API bug should now be on master, and the trials it's generating for you should look more reasonable. Also, regarding the fact that it will not generate more trials after the first 5: if you need more trials in parallel in the beginning, check out this section of the Service API tutorial; at the end, there is an explanation of the flag you can use. As for the SEM being set to 0, I will update you when that behavior is fixed! Thank you again for pointing it out!
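Assuming the flag referred to is `enforce_sequential_optimization` (an assumption; the tutorial section itself is not quoted here), the usage would be:

```
# When False, AxClient does not force the quasi-random phase to finish
# before generating further trials, loosening the early parallelism cap.
ax_client = AxClient(enforce_sequential_optimization=False)
```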
@lena-kashtelyan, I now installed version 0.1.3 (ax-platform-0.1.3): still getting these strange trials after number 5. ...Ah, I see: the fix is on master, but not yet in a release... I now installed master, and the trials are indeed more diverse: […]
Still, they are different in nature compared to the first 5... Thanks!
@avimit, I reproduced your case locally (just to verify that the behavior would be the same), starting by attaching the three initial trials you provided: […]
Then, I generated 15 more trials without completing any beyond the initial three: […]
And these are the parameterizations that were generated: […]
The GP-generated trials I get are quite similar to yours. And they are indeed different in nature from the quasi-randomly generated ones; this is because they are generated through GP + EI (expected improvement). Those points are chosen to target areas with a mixture of high uncertainty and good objective function value, whereas the quasi-random points are purely exploratory. Let me know if this helps!
@lena-kashtelyan, thank you! So you are saying that this is normal behaviour. I need to read your documentation more closely: I wasn't aware of the automatic shift between generation methods. I read here that "generation_strategy – Optional generation strategy. If not set, one is intelligently chosen based on properties of search space." Can I read somewhere about the rules of that intelligent choice-making? Does defining a […]?
Update: I ran now with many more prior results (23 instead of 3), and the generated trials (20 of them) do look more 'random': […]
I also rolled back to version 0.1.3 and got strange repetitive trials again, so the bug-fix in master really fixed a bug 👍🏻 BTW, I noticed that when (accidentally) providing too many duplicate initial points, Ax crashes with […]
@avimit, it is normal behavior indeed! I was just making notes on how we need to expand the documentation on […]. To set the generation strategy manually, you will indeed need to make one, with the […]. Just out of curiosity –– are you looking to specify a generation strategy of your own for research / experimental purposes? Regarding the runtime error, is it coming from […]?
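For illustration, a manually specified strategy that mirrors the default quasi-random-then-GP behavior might look like this (a sketch only; constructor arguments have varied across Ax versions, e.g. early releases used `num_arms` rather than `num_trials`):

```
from ax.modelbridge.generation_strategy import GenerationStrategy, GenerationStep
from ax.modelbridge.registry import Models

# 5 quasi-random (Sobol) trials for exploration, then switch to GP + EI.
generation_strategy = GenerationStrategy(
    steps=[
        GenerationStep(model=Models.SOBOL, num_trials=5),
        GenerationStep(model=Models.GPEI, num_trials=-1),  # -1 = no limit
    ]
)
ax_client = AxClient(generation_strategy=generation_strategy)
```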
@lena-kashtelyan Yes, I may wish to specify a generation strategy of my own, but at a later stage, not right away. I will try to reproduce the runtime error and update. Thanks!
Update: I reproduced it, and sorry, it is an error when running BoTorch, not Ax... I am testing several packages in a sequence (GPyOpt, BoTorch, Ax), and didn't notice that it failed before the Ax section. Indeed, as you had guessed, the error is coming from […]. If you are still interested, although it's not an Ax error, this is the error message tail: […]
@avimit, sounds good! Please feel free to open an issue regarding adding a custom generation strategy if you need help with it. I will pass the error on to the BoTorch folks –– I think it's something they've dealt with before. Is it blocking for you at all? Thank you for reporting it!
@lena-kashtelyan Not blocking, but it does demand consideration: I would have to de-duplicate init points (sometimes we collect results from several previous runs, and it can happen that some of them used the same "grid" initiation and so share X points).
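A minimal de-duplication sketch with NumPy, assuming `X_init` / `Y_init` arrays shaped as in the earlier comment (rows are points; this catches exact duplicates only –– round first, e.g. with `np.round`, to also catch near-duplicates):

```
import numpy as np

# Keep the first occurrence of each unique X row and the matching Y rows.
_, keep_idx = np.unique(X_init, axis=0, return_index=True)
keep_idx = np.sort(keep_idx)  # preserve the original ordering
X_dedup, Y_dedup = X_init[keep_idx], Y_init[keep_idx]
```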
@avimit, I just heard back from @Balandat regarding this issue, and it seems like it would be helpful to have a slightly more elaborate repro: next time you see the errors, could you record what the trials and the rest of the data were? If it helps, you should be able to get the data via […]
@lena-kashtelyan, I noticed that you don't yet have a tag with the above bug fix (the latest one is still 0.1.3); do you know when the next release is planned?
@avimit, we're getting a release ready today / tomorrow!
@lena-kashtelyan, hi, I read that the v0.1.4 release is broken and not to be used.
@avimit, it's out today! With it comes the fix for this bug. Closing the issue for now; feel free to reopen if something still seems unsolved, and thank you so much again for your patience and feedback. Edit: the fix covers both bugs discussed in this issue –– the assumption that SEM is 0, and the generation of similar trials when the existing ones have not yet been updated with data.
Hi, thanks again.
Hi,
I have been trying, in recent days, to use Ax for my task.
The use case: supplying X new candidates for evaluation, given known and pending evaluations. Our "evaluation" is the training and testing of an ML model, done on a cloud server. I just want to feed the results to the BO model and get new points for evaluation, i.e., to have Ax power our HPO. No success yet.
In BoTorch, I achieved this goal, with these 5 lines at the core: […]
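For context, a typical five-line BoTorch core of this kind (a hedged sketch of what candidate generation looked like circa BoTorch 0.1, not the author's actual code) is:

```
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_model
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# train_X, train_Y: torch tensors of the known evaluations (n x d and n x 1).
model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_model(ExactMarginalLogLikelihood(model.likelihood, model))
acqf = ExpectedImprovement(model, best_f=train_Y.max())
bounds = torch.tensor([[-5.0, 0.0], [10.0, 15.0]])  # assumed box bounds
candidate, _ = optimize_acqf(acqf, bounds=bounds, q=1, num_restarts=10, raw_samples=256)
```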
I've been trying to use `BotorchModel` via the Developer API. Questions:
Have I been looking in the wrong place? Should I have been using the Service API (losing some flexibility)?
Could you please direct me to relevant examples in both APIs?
(One of my main reasons for shifting to Ax is that I want, in the future, to optimize over a mixed domain: some parameters continuous and some discrete; but this is a different question...)
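For what it's worth, the Service API can express such a mixed domain directly in the search-space definition; a sketch (all parameter names and values below are illustrative):

```
ax_client.create_experiment(
    name="mixed_domain_hpo",  # illustrative
    parameters=[
        {"name": "learning_rate", "type": "range", "bounds": [1e-5, 1e-1], "log_scale": True},
        {"name": "num_layers", "type": "range", "bounds": [1, 8], "value_type": "int"},
        {"name": "optimizer", "type": "choice", "values": ["adam", "sgd", "rmsprop"]},
    ],
    objective_name="accuracy",
)
```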
Thanks a lot,
Avi