
trust-region optimization #161

Open
quattro opened this issue Feb 1, 2022 · 3 comments
Labels
enhancement New feature or request

Comments

quattro commented Feb 1, 2022

Hi all,

Thanks for the awesome library. It would be fantastic to eventually see some options for trust-region optimization. For lower-dimensional problems in my field (e.g., variance components, or their generalizations), it provides a much more robust approach to inference compared with 2nd order counterparts.

mblondel (Collaborator) commented Feb 1, 2022

Indeed, trust-region methods would be nice to have, but I didn't get the "compared with 2nd order counterparts" comment, because in my view trust region is mainly an alternative to line search. For instance, it would be great to have trustncg (Newton-CG + trust region), an alternative to ncg (Newton-CG + line search).
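To make the pairing concrete: SciPy already ships both variants, so the proposed trustncg/ncg split can be illustrated there (this is a SciPy sketch on the built-in Rosenbrock helpers, not jaxopt API):

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

x0 = np.array([-1.2, 1.0])  # classic Rosenbrock starting point

# Newton-CG + line search (the "ncg" side of the comparison)
res_ls = minimize(rosen, x0, method="Newton-CG", jac=rosen_der, hess=rosen_hess)

# Newton-CG + trust region (the proposed "trustncg" analogue)
res_tr = minimize(rosen, x0, method="trust-ncg", jac=rosen_der, hess=rosen_hess)
```

Both use the same model (gradient + Hessian); they differ only in how the step is globalized, which is exactly the line-search vs. trust-region distinction.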

quattro (Author) commented Feb 1, 2022

That's great to hear!

Re my comparison: I'm far from an expert, and my limited understanding is that Newton-CG + line search first computes a direction (the Hessian- or approximate-Hessian-modified gradient) and then determines the step magnitude via line search (or some fixed rate), whereas trust region first limits the magnitude by the currently accepted trust radius and then finds an appropriate direction, which may or may not coincide with the ncg direction (depending on whether the optimal step lies inside the ball). If the problem is convex, both should ultimately find the same optimum, but for cases like variance components, which (I believe) are generally non-convex, trust-region approaches seem to be a bit better behaved.
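The "bound the magnitude first, then pick the direction" mechanics, plus the accept/reject and radius-update logic, can be sketched in a minimal Cauchy-point trust-region loop in NumPy (function names, thresholds, and update factors here are illustrative textbook defaults, not any library's API):

```python
import numpy as np

def cauchy_point(g, H, radius):
    # Minimizer of the local quadratic model along -g, clipped to the trust ball.
    gnorm = np.linalg.norm(g)
    gHg = g @ H @ g
    tau = 1.0 if gHg <= 0 else min(1.0, gnorm**3 / (radius * gHg))
    return -(tau * radius / gnorm) * g

def trust_region_minimize(f, grad, hess, x, radius=1.0, max_radius=100.0,
                          tol=1e-6, maxiter=500):
    for _ in range(maxiter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        p = cauchy_point(g, H, radius)                # step limited by the radius
        predicted = -(g @ p + 0.5 * p @ H @ p)        # decrease promised by the model
        actual = f(x) - f(x + p)                      # decrease actually achieved
        rho = actual / predicted if predicted > 0 else -1.0
        if rho < 0.25:
            radius *= 0.25                            # model untrustworthy: shrink ball
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), radius):
            radius = min(2.0 * radius, max_radius)    # good fit at boundary: grow ball
        if rho > 0.1:
            x = x + p                                 # accept only if the model held up
    return x

# Example: an ill-conditioned convex quadratic with minimum at (1, 0.1).
A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
hess = lambda x: A
x_star = trust_region_minimize(f, grad, hess, np.zeros(2))
```

A production method would solve the subproblem with Steihaug-CG rather than the Cauchy point, but the radius management and the ratio test are the same.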

@mblondel added the enhancement (New feature or request) label Feb 27, 2022
deasmhumhna commented Oct 24, 2022

This may be dead, but with a minor object-oriented modification of OSQP you can support arbitrary convex sets for which the support function, as well as projection onto the recession and normal cones, can be easily computed, which covers most simple convex sets of interest (see Banjac et al. (2019), "Infeasibility Detection in the Alternating Direction Method of Multipliers for Convex Optimization", and this useful blog post). The Newton-CG algorithm can then call this generalized OSQP to solve the subproblem to the desired accuracy. The same method can also solve quadratically constrained QPs quite easily, with $A_i^T A_i = P_i$, if $P_i$ is readily factorable, or using a Chebyshev polynomial approximation of $f(P_i) = P_i^{1/2}$ for general implicit operators. I wrote an ADMM-based QP solver in Go that I used as part of several projects last year, though I lost the files around a month ago when my laptop was stolen. A ConvexSet class shouldn't be too difficult to recreate, though.
