OEP: more general state for user-supplied functions #306

Closed
timholy opened this issue Nov 14, 2016 · 3 comments

timholy commented Nov 14, 2016

It would be lovely to be able to support sparse Hessians, Jacobians (for constrained optimization), etc. It would be good to move to an API where one can supply HessState and JacState objects of any kind. The user-supplied functions would update the state at the current point, and such state objects would then only have to support operations like J*x and H\x. Obviously H could be a dense n-by-n matrix, but it wouldn't have to be.
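A minimal sketch of the idea, assuming nothing about Optim's internals: the name SparseHessState is invented here, and the code uses current Julia syntax (struct, SparseArrays), which postdates this thread. The point is only that a Hessian "state" need not be a dense Matrix as long as the two operations a Newton-type solver calls are defined for it.

using LinearAlgebra, SparseArrays
import Base: *, \

# Hypothetical state type wrapping a sparse Hessian; not part of Optim's API.
struct SparseHessState{T}
    H::SparseMatrixCSC{T,Int}
end

# The user-supplied function would update s.H in place at the current point;
# the solver itself would only ever need these two operations.
*(s::SparseHessState, x::AbstractVector) = s.H * x
\(s::SparseHessState, x::AbstractVector) = s.H \ x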


pkofod commented Dec 16, 2016

I will try to fit this in when I have a shot at the dispatch-based function interface. I intend to target this after #304, as quite a few open issues are related to this.


pkofod commented Jan 23, 2017

Currently, #337 simply opens up the fields for storing gradients and Hessians. This means that if you use the full constructor (with all the fields), you can supply your own gradient type as storage, as long as you've implemented

copy
copy!
getindex
vecdot(s::Array, g::G)

for your type G. For the Hessian you can in principle do the same, but currently you'll run into a wall when it gets passed to the NewtonTrustRegion solver, and in Newton you'd need to support a method that can be passed to PositiveFactorizations. I do intend to add a switch, such that you can simply take good old renegade Newton steps, -H\g; then your state only needs to support a \ method.

I'm going to leave Newton alone for now, but I will try to make some stupid examples with the gradient, such as

type MyStupidGradient{T}
    g::Vector{T}
end

just to make sure it works all the way through. Some changes might have to be made over at LineSearches.jl, but I will find out :)
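To make that concrete, here is a hypothetical sketch of the four methods listed above for the MyStupidGradient type, written in current Julia syntax (struct instead of the 0.5-era type; copy! and vecdot have since been renamed copyto! and dot). The exact set of required methods may differ between Optim versions, so treat this as illustrative only.

using LinearAlgebra

# The type from the comment above, in post-1.0 syntax.
struct MyStupidGradient{T}
    g::Vector{T}
end

Base.copy(g::MyStupidGradient) = MyStupidGradient(copy(g.g))
Base.copyto!(dest::MyStupidGradient, src::MyStupidGradient) = (copyto!(dest.g, src.g); dest)
Base.getindex(g::MyStupidGradient, i...) = getindex(g.g, i...)
# Corresponds to the vecdot(s::Array, g::G) method listed above.
LinearAlgebra.dot(s::Array, g::MyStupidGradient) = dot(vec(s), g.g)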


pkofod commented Mar 13, 2017

This should be possible with #337. If you create a OnceDifferentiable "manually", that is, you also provide the gradient storage g yourself, then this works. (The gradient storage is parametric now, as is the Hessian.)
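For context, a rough sketch of the standard (dense-storage) construction that this generalizes, assuming the OnceDifferentiable(f, g!, x0) constructor and the g!(storage, x) argument order of recent Optim/NLSolversBase releases. The full constructor that accepts custom gradient storage has changed across versions, so the closing comment is only indicative.

using Optim

# Rosenbrock function and its analytic gradient.
f(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

function g!(storage, x)
    storage[1] = -2.0 * (1.0 - x[1]) - 400.0 * x[1] * (x[2] - x[1]^2)
    storage[2] = 200.0 * (x[2] - x[1]^2)
    return storage
end

x0 = zeros(2)
# Standard construction: a dense Vector is allocated for the gradient.
od = OnceDifferentiable(f, g!, x0)
res = optimize(od, x0, GradientDescent())
# To use a custom gradient type, you would instead call the full constructor
# and pass your own storage object in place of the default Vector; the exact
# argument list is version-dependent, so check the NLSolversBase source.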

pkofod closed this as completed Mar 13, 2017