Algorithm returns next samples to evaluate the objective function #9
Do you mean something like this:

```python
optimizer = BayesianOptimizer(initial_x, initial_f)
for _ in range(n_iter):
    x = optimizer.get_next_point()
    optimizer.add_objective_value(f(x))
PF = optimizer.get_pareto_front()
```

I guess that could be done. It would require some refactoring, but cleaning up the code was something I was meaning to do anyway. It would have to be a fork, though, as I can't really change the interface in this repository because it is linked to the publication. |
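For illustration, the loop above can be backed by a minimal ask/tell skeleton. This is only a sketch, not the library's actual API: the class and method names mirror the snippet, and a uniform random sample stands in for the real surrogate-based acquisition step that a Bayesian optimizer would use.

```python
import random

class AskTellOptimizer:
    """Minimal ask/tell skeleton: the optimizer suggests points and the
    caller evaluates the objective, so no objective callback is stored."""

    def __init__(self, bounds):
        self.bounds = bounds     # [(lo, hi), ...] per input dimension
        self.x = []              # evaluated inputs
        self.f = []              # observed objective values
        self._pending = None     # last suggested point, not yet told

    def get_next_point(self):
        # Placeholder acquisition: uniform random sample within the bounds.
        # A real implementation would maximize an acquisition function
        # built on a surrogate model fitted to (self.x, self.f).
        self._pending = [random.uniform(lo, hi) for lo, hi in self.bounds]
        return self._pending

    def add_objective_value(self, value):
        # "Tell" step: record the observation for the last suggested point.
        if self._pending is None:
            raise RuntimeError("call get_next_point() before add_objective_value()")
        self.x.append(self._pending)
        self.f.append(value)
        self._pending = None

    def best(self):
        # Best observed point so far (assuming minimization).
        i = min(range(len(self.f)), key=self.f.__getitem__)
        return self.x[i], self.f[i]

# Usage: the caller owns the main loop and evaluates the objective itself.
opt = AskTellOptimizer(bounds=[(-5.0, 5.0)])
for _ in range(20):
    x = opt.get_next_point()
    opt.add_objective_value(x[0] ** 2)   # objective evaluated outside the optimizer
x_best, f_best = opt.best()
```

The key design point is that the optimizer never holds a reference to `f`, so evaluations can happen in an external simulation, a lab experiment, or a batch job.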
Yes, you're right. In this way, we can directly plug the framework in wherever we want, as it doesn't depend on the objective function directly. |
That actually makes more sense hehe... I just never had to do that before, so it didn't occur to me. The tag already exists; it is just a matter of updating the README file. In this case, can we add a tag to the wiki, or freeze a version of the documentation as well? As for the new features, I guess the first step would be a major refactoring, so that the sampling of the next point and the optimization proper are appropriately separated. |
Yes, this is possible. You can create pages in the wiki, each page containing different documentation. You can freeze a page with the publication title. Yes, you're right! A code refactor is needed, and it would unlock greater potential in your library. I'm looking forward to it. 👍🏻 |
By the way, @ppgaluzio , do you plan to implement these anytime soon? |
Well, I will try to work on the refactoring as soon as possible, but I am pretty involved in other projects, so it will mostly be in my free time. The way I am thinking of approaching the refactoring, the actual implementation of the new feature would come as a consequence of it, so the user can choose to use it either way. So, if the refactoring goes fast enough (I am thinking a couple of weeks, if nothing new comes up), the new feature should just be a matter of calling the methods in the appropriate way. |
Awesome. If it's available fairly early, I am planning to use this for my research work. I'll be waiting for it. Thanks. 😄 |
Hello @ppgaluzio , any new update on this? :D |
I am working on it, but can't prioritize it right now. So I can't tell exactly when I will be able to finish it, unfortunately. |
sure. no problem. |
Hi @ppgaluzio, have you had a chance to look at this any more? This approach is highly relevant to my research (physical science) and I'd love to try and implement it. I saw another thread where application to discrete numerical/categorical spaces was proposed and this would also be ideal for my work. |
Hello @ppgaluzio , I have a suggestion!
Why don't we make a small change to the current flow of the framework: instead of accepting the objective function during initialization, the optimizer returns a random initial sample to be evaluated by the objective function. During each iteration, new search samples are returned; we evaluate the objective function and feed the search samples and the evaluated results back, and this in turn optimizes and returns the new sample to be evaluated.
The reason for the suggestion is that currently the framework has to own the main loop. With the approach I suggested, we can just plug the framework in anywhere we want, since it doesn't directly depend on the objective function.
Let me know your thoughts.
Thank you.
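The flow suggested above, sketched as pseudocode (all method names here are hypothetical, since this interface does not exist in the library yet):

```
optimizer = BayesianOptimizer(bounds)          # no objective function passed in
X = optimizer.random_initial_samples(n_init)   # hypothetical: initial design to evaluate
optimizer.tell(X, [f(x) for x in X])           # caller evaluates f, feeds results back
repeat:
    x = optimizer.ask()                        # next suggested search sample
    optimizer.tell(x, f(x))                    # feed the evaluated result back
```

This is the usual "ask/tell" pattern: because the caller owns the main loop, `f` can live in any external system rather than being wired into the optimizer.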