NLopt improvements #72

Open
1 of 6 tasks
bluescarni opened this issue Apr 8, 2017 · 0 comments

bluescarni commented Apr 8, 2017

#67 introduces a first iteration of the NLopt wrappers in pagmo. Random dump of possible improvements:

  • implement support for the augmented Lagrangian algorithms NLOPT_AUGLAG and NLOPT_AUGLAG_EQ (Nlopt auglag + missing serialization #75)
  • in addition to being able to set up the algo's parameters after construction, we should probably provide a kwargs ctor in Python to set up the parameters upon construction (for consistency with other algos)
  • implement the missing algorithm configuration options, such as nlopt_set_xtol_abs() (absolute tolerance on the parameters, specified as a vector of per-component tolerances rather than a single value valid for all components), nlopt_set_initial_step() (initial step size for derivative-free algorithms) and nlopt_set_vector_storage() (vector storage size for limited-memory quasi-Newton algorithms); see the C API sketch after this list
  • add support for the global optimisation algorithms (this essentially just requires adding a handful of config options which are not yet exposed, because they are meaningful only for the global opt algos)
  • add support for Hessian preconditioning (still experimental in NLopt)
  • implement a cache to avoid repeated calls to problem::fitness(). pagmo computes the objfun and the constraints in a single call to problem::fitness(), but NLopt (and, presumably, other local optimisation libraries) computes the objfun and the constraints in separate functions. This means that our local optimisation wrappers might end up calling fitness() repeatedly with the same decision vector. The idea is then to code a cache that remembers the results of the last N calls to fitness() (and maybe gradient() as well?), in order to avoid wasting CPU cycles (see the cache sketch below).

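For context, a minimal sketch of the underlying NLopt C API calls that the missing configuration options map to. The algorithm choice, dimension and the tolerance/step values are purely illustrative; the pagmo wrapper would expose these through its own setters rather than the raw C handle:

```cpp
#include <vector>

#include <nlopt.h>

int main()
{
    // Illustrative problem dimension and algorithm choice.
    const unsigned dim = 3;
    nlopt_opt opt = nlopt_create(NLOPT_LD_LBFGS, dim);

    // Per-component absolute tolerance on the decision vector.
    const std::vector<double> xtol_abs{1e-8, 1e-8, 1e-6};
    nlopt_set_xtol_abs(opt, xtol_abs.data());

    // Initial step size (mostly relevant for derivative-free algorithms).
    const std::vector<double> dx{0.1, 0.1, 0.5};
    nlopt_set_initial_step(opt, dx.data());

    // Vector storage size for limited-memory quasi-Newton algorithms (e.g., L-BFGS).
    nlopt_set_vector_storage(opt, 20u);

    nlopt_destroy(opt);
}
```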
See http://ab-initio.mit.edu/wiki/index.php/NLopt_Reference
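A rough sketch of what the last-N fitness cache could look like. The class name, container choice and eviction policy are placeholders; a real implementation would live inside the NLopt wrapper and would probably need an analogous cache for gradient():

```cpp
#include <algorithm>
#include <cstddef>
#include <deque>
#include <utility>
#include <vector>

// Sketch of a cache that remembers the (decision vector, fitness) pairs from the
// most recent fitness evaluations, so that the wrapper can skip a call to
// problem::fitness() when NLopt asks for the objfun and the constraints
// separately with the same decision vector.
class fitness_cache {
    using entry = std::pair<std::vector<double>, std::vector<double>>;

public:
    explicit fitness_cache(std::size_t max_size) : m_max_size(max_size) {}

    // Return a pointer to the cached fitness vector for dv, or nullptr on a miss.
    const std::vector<double> *lookup(const std::vector<double> &dv) const
    {
        const auto it = std::find_if(m_entries.begin(), m_entries.end(),
                                     [&dv](const entry &e) { return e.first == dv; });
        return (it != m_entries.end()) ? &it->second : nullptr;
    }

    // Store a (decision vector, fitness) pair, evicting the oldest entry if full.
    void insert(std::vector<double> dv, std::vector<double> fitness)
    {
        if (m_max_size == 0u) {
            return;
        }
        if (m_entries.size() >= m_max_size) {
            m_entries.pop_front();
        }
        m_entries.emplace_back(std::move(dv), std::move(fitness));
    }

private:
    std::size_t m_max_size;
    std::deque<entry> m_entries;
};
```

A linear scan is fine here because N is expected to be small (a handful of entries), so the lookup cost should be negligible compared to the cost of a fitness evaluation.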
