This repository has been archived by the owner on Nov 23, 2018. It is now read-only.

Linesearchers should check function value before asking for gradient #127

Closed
btracey opened this issue Sep 24, 2015 · 8 comments

@btracey
Member

btracey commented Sep 24, 2015

Many problems have a gradient that's expensive to compute. The line searchers we implement should check that the function value has decreased before asking for the gradient.

@vladimir-ch
Member

Backtracking returns only FuncEvaluation, but at present Bisection needs both at the same time. Splitting it will require some changes and perhaps a bit of thought. If the step satisfies Armijo, then it is easy, just return GradEvaluation. If not, then it is not so clear. If maxStep has not been found yet, then probably the right thing is to halve the step and try again with another FuncEvaluation. If the step has already been bounded, then ... ?

Anyway, when this is done, I will be this close to suggesting that we drop the ORability of evaluation operations. Just saying (although I am afraid it was me who came up with the idea).

@vladimir-ch
Member

I tried to come up with something for Bisection that would actually save some work if a step does not satisfy the Armijo condition. But needing both values simultaneously is essential for the algorithm, so either we don't save work or it will be a different algorithm. Do you see another way, @btracey?

If we just want to issue FuncEvaluation and GradEvaluation separately, that can be done easily, though.

@btracey
Member Author

btracey commented Sep 27, 2015

That's exactly what I have in mind. Request the function value. If it passes, request the gradient. If not, update.
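The flow btracey describes could be sketched roughly as follows. This is a minimal illustration, not the actual gonum code: the `Operation` type and `nextOp` helper are hypothetical stand-ins, with the Armijo (sufficient decrease) test deciding whether the gradient is worth requesting at all.

```go
package main

import "fmt"

// Operation is a simplified stand-in for the evaluation-request type
// (hypothetical names for illustration, not the actual gonum API).
type Operation int

const (
	FuncEvaluation Operation = iota
	GradEvaluation
)

// nextOp sketches the flow above: after a FuncEvaluation, request the
// gradient only if the Armijo (sufficient decrease) condition holds;
// otherwise halve the step and ask for f again, never touching the
// expensive gradient. f is the value at the trial step, f0 and g0 the
// value and directional derivative at step 0, c1 the Armijo constant.
func nextOp(f, f0, g0, step, c1 float64) (Operation, float64) {
	if f <= f0+c1*step*g0 {
		// Sufficient decrease: the gradient is now worth computing.
		return GradEvaluation, step
	}
	// Insufficient decrease: backtrack with another FuncEvaluation.
	return FuncEvaluation, step / 2
}

func main() {
	op, step := nextOp(10, 1, -1, 1, 1e-4)
	fmt.Println(op == FuncEvaluation, step) // prints true 0.5
}
```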

@vladimir-ch
Member

But update needs gradient too, so we don't save anything.

@vladimir-ch
Member

Continuing the previous discussion from closed #137:

I do think that our line searches should evaluate individually (for example, so gradient computations can be saved).

... One was to see if gradient computations can be saved if we know that the point does not satisfy even Armijo. It is obvious to me now that they cannot: the algorithm simply needs the derivative at the midpoint to decide which side to subdivide. We could modify the algorithm, but then it would not be the same thing, and it is not clear how to modify it or whether it is worth it. If we modified the bracketing phase not to use g, only f, then we could save gradient evaluations during that phase, because the action does not need g; it is always: go forward.

... but Backtracking does not ask for the gradient and Bisection cannot do without it.

That's not true of Bisection. If the function value is higher, then we always know which direction to push. If the function value is lower, then the gradient is needed to decide which way to slide.

The very first check in Bisection is g < 0, so as it stands we always need the derivative. We can remove it, though, and make the bracketing phase derivative-free (slide forward until the midpoint has a lower value than the two end points). Perhaps we should. Also, shouldn't we fail earlier than when b.step == step? Just asking.
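The derivative-free bracketing vladimir-ch suggests could look something like this sketch: slide a window of three equally spaced points forward using only function values, stopping once the middle point is the lowest. The `bracket` helper is hypothetical (not the gonum implementation), assumes the function eventually turns upward, and omits any safeguard against sliding forever on an unbounded decrease.

```go
package main

import "fmt"

// bracket slides three equally spaced points forward until the middle
// one has the lowest function value, so a minimum lies in [a, b].
// Only function evaluations are used; no derivative is needed.
// (Hypothetical sketch: no iteration limit or failure handling.)
func bracket(f func(float64) float64, step float64) (a, m, b float64) {
	a, m, b = 0, step, 2*step
	fa, fm, fb := f(a), f(m), f(b)
	for !(fm < fa && fm < fb) {
		// The midpoint is not yet the lowest point: go forward.
		a, fa = m, fm
		m, fm = b, fb
		b += step
		fb = f(b)
	}
	return a, m, b
}

func main() {
	quad := func(x float64) float64 { return (x - 3) * (x - 3) }
	a, m, b := bracket(quad, 1)
	fmt.Println(a, m, b) // prints 2 3 4; the minimum at x = 3 is bracketed
}
```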

@btracey
Member Author

btracey commented Sep 29, 2015

First, take the initial bracketing phase. There are four cases:

  1. Decrease in f, g < 0
  2. Decrease in f, g > 0
  3. Increase in f, g < 0
  4. Increase in f, g > 0

Both cases 3 and 4 mean that a minimum has been bracketed, as the function value has increased. So there we do not need to check the gradient.

When it is bracketed and we look at a new point, again, if the function value is higher we will always cut toward the best minimum. It is only when the function value is lower that the gradient value is needed.
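The case analysis above can be condensed into two tiny decisions: whether the gradient is needed at all, and, when it is, which side to cut. These helpers are illustrative only (not the gonum bisection code); the point is that any increase in f brackets a minimum regardless of the sign of g, so only a decrease forces a gradient evaluation.

```go
package main

import "fmt"

// needGradient encodes cases 1-4 above for a trial point: an increase
// in f (cases 3 and 4) already tells us which side to cut, so the
// derivative matters only when f decreased (cases 1 and 2).
func needGradient(fTrial, fBest float64) bool {
	return fTrial < fBest
}

// cutSide chooses the subinterval once the gradient at the trial point
// is known: a negative derivative means the minimum lies further ahead.
func cutSide(gTrial float64) string {
	if gTrial < 0 {
		return "right"
	}
	return "left"
}

func main() {
	// f went up at the trial point: cut toward the best point, no gradient needed.
	fmt.Println(needGradient(2.0, 1.0)) // prints false
	// f went down and g < 0: the minimum lies to the right of the trial point.
	fmt.Println(cutSide(-0.5)) // prints right
}
```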

@vladimir-ch
Member

Both cases 3 and 4 mean that a minimum has been bracketed, as the function value has increased. So there we do not need to check the gradient.

Yes. But currently the first test in Bisection during bracketing is g > 0. To do what you are aiming at the first test must be f >= b.minF (or f > b.f if we are updating all three points while sliding forward).

When it is bracketed and we look at a new point, again, if the function value is higher we will always cut toward the best minimum. It is only when the function value is lower that the gradient value is needed.

Oh, I have just realized what was not fitting in and was thus confusing me: the current code stores the derivative in b.minGrad and b.maxGrad, so it makes it seem as if those derivatives are needed. But in fact they are not! They are just stored and never used, so yes, we can save gradient evaluations in some cases. Phew.

Do you want to put this together or should I take a stab at it?

@vladimir-ch
Member

Fixed by #143
