
Conversation

paraynaud
Member

No description provided.

@codecov

codecov bot commented Jan 5, 2023

Codecov Report

Base: 92.01% // Head: 92.04% // Increases project coverage by +0.02% 🎉

Coverage data is based on head (ee18570) compared to base (4ad1415).
Patch coverage: 100.00% of modified lines in pull request are covered.

Additional details and impacted files
@@            Coverage Diff             @@
##             main      #87      +/-   ##
==========================================
+ Coverage   92.01%   92.04%   +0.02%     
==========================================
  Files           6        6              
  Lines         839      842       +3     
==========================================
+ Hits          772      775       +3     
  Misses         67       67              
Impacted Files Coverage Δ
src/quasi-newton.jl 94.28% <100.00%> (+0.34%) ⬆️
src/NLPModelsModifiers.jl 100.00% <0.00%> (ø)



@paraynaud paraynaud force-pushed the pr-count-neval_hprod branch from 3d9bc23 to 136364f Compare January 5, 2023 19:32
@github-actions
Contributor

github-actions bot commented Jan 5, 2023

Package name (latest / stable status badges not rendered):
ADNLPModels.jl
AmplNLReader.jl
CUTEst.jl
CaNNOLeS.jl
DCI.jl
FletcherPenaltySolver.jl
JSOSolvers.jl
LLSModels.jl
NLPModelsIpopt.jl
NLPModelsJuMP.jl
NLPModelsTest.jl
Percival.jl
QuadraticModels.jl
SolverBenchmark.jl
SolverTools.jl

@paraynaud paraynaud requested review from dpo and tmigot January 5, 2023 21:01
@paraynaud
Member Author

Following #86

Member

@tmigot tmigot left a comment

@paraynaud The more I think of it, the more it feels slightly strange that nlp.model counts an hprod evaluation while no Hessian-vector products were used.

Maybe we should just add a separate Counters for the QuasiNewtonModel?
That way neval_hprod(nlp) would return 1, but neval_hprod(nlp.model) == 0.
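
A minimal sketch of that idea, for illustration only: the wrapper type, field, and helper names below are made up and are not the NLPModelsModifiers.jl API.

using NLPModels

# Hypothetical stand-in for a quasi-Newton wrapper that carries its own
# Counters instead of sharing the wrapped model's.
struct QNWrapper{M <: AbstractNLPModel}
  model::M             # inner model; its counters are left untouched
  counters::Counters   # separate counters for the wrapper
end

# Reading the wrapper's own counter: after one quasi-Newton product this
# would return 1, while neval_hprod(qn.model) would stay at 0.
wrapper_neval_hprod(qn::QNWrapper) = qn.counters.neval_hprod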

Co-authored-by: tmigot <tangi.migot@gmail.com>
@paraynaud
Member Author

paraynaud commented Jan 6, 2023

@paraynaud The more I think of it, the more it feels slightly strange that nlp.model counts an hprod evaluation while no Hessian-vector products were used.

I don't agree: since we use the quasi-Newton operator within hprod, it is consistent to count it with neval_hprod.
Otherwise, to keep things consistent, we would have to add a method approxprod similar to hprod, but that would mess up the trunk implementation and probably the other optimization methods from JSOSolvers.
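
A rough sketch of the mechanism under discussion, not the exact diff of this PR: hprod! on the quasi-Newton wrapper applies the operator, so counting each call as one (approximate) Hessian-vector product looks natural. The helper name qn_hprod_sketch! is made up; in the package this logic lives in hprod! for QuasiNewtonModel.

using LinearAlgebra, NLPModels, NLPModelsModifiers

# Simplified stand-in for hprod! on a QuasiNewtonModel.
function qn_hprod_sketch!(qn::QuasiNewtonModel, x::AbstractVector, v::AbstractVector, Hv::AbstractVector)
  increment!(qn.model, :neval_hprod)   # the counting this PR adds (the counters live on the inner model)
  mul!(Hv, qn.op, v)                   # apply the L-BFGS / L-SR1 operator
  return Hv
end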

@github-actions
Contributor

github-actions bot commented Jan 6, 2023

Breakage test results for the same downstream packages as in the previous report (latest / stable status badges not rendered).

@tmigot
Member

tmigot commented Jan 6, 2023

I am not sure I fully understand. Let's try to clarify: we have

nlp = ADNLPModel(f, x0)
model = LBFGSSolver(nlp)

and we do

hprod!(model, x0)

With your PR, we currently get

neval_hprod(nlp) == 1
neval_hprod(model) == 1

because the LBFGSModel counter is synced with the nlp (actually, model.counters doesn't exist).
I agree that neval_hprod(model) == 1 as the "hessian" of model was used.
However, I find neval_hprod(nlp) == 1 slightly confusing because no hessian of nlp has been used.

I think it would be accurate to have

neval_hprod(nlp) == 0 # because no hessian was used
neval_hprod(model) == 1

@paraynaud
Member Author

I may have been confusing: by LBFGS I mean an LBFGSModel used in trunk, not the inverse LBFGS linesearch from JSOSolvers.

I use:

nlp = ADNLPModel(f, x0)
model = LBFGSModel(nlp)
ges = trunk(model)
because the LBFGSModel counter is synced with the nlp (actually, model.counters doesn't exist).

correct!

I agree that neval_hprod(model) == 1 as the "hessian" of model was used.
However, I find neval_hprod(nlp) == 1 slightly confusing because no hessian of nlp has been used.

I had not thought about it this way, and I get your point.
I am not sure how to distinguish the two neval_hprod methods, since the counters are paired with nlp.

A suggestion: maybe we can link the neval_hprod method to the counter of model.op, so that it returns the number of mul! performed by model.op, and delete the increment of nlp.counters.neval_hprod that I added.
By doing that, neval_hprod(nlp) will stay at 0 all the time (as long as no hprod is performed directly on nlp).
What do you think?

@tmigot
Member

tmigot commented Jan 7, 2023

Yes, that would work. If I understand correctly, you plan to do something like this:

neval_hprod(model::QuasiNewtonModel) = model.op.counter

?

Do not hesitate to add this as a unit test:

neval_hprod(nlp) == 0 # because no hessian was used
neval_hprod(model) == 1
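
A slightly fuller, self-contained version of that test, as a sketch assuming the behavior settled on in this thread; the objective, dimension, and vectors are arbitrary.

using ADNLPModels, NLPModels, NLPModelsModifiers, Test

nlp = ADNLPModel(x -> sum(x .^ 2), ones(2))   # any small smooth problem will do
model = LBFGSModel(nlp)

hprod(model, nlp.meta.x0, ones(2))            # one quasi-Newton operator product

@test neval_hprod(nlp) == 0                   # no Hessian of nlp was used
@test neval_hprod(model) == 1                 # the wrapper's operator was applied once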

@paraynaud
Member Author

@tmigot I gave up writing it as

NLPModels.neval_hprod(nlp::QuasiNewtonModel) = nlp.op.nprod

so I wrote it as

NLPModels.neval_hprod(nlp::LBFGSModel) = nlp.op.nprod
NLPModels.neval_hprod(nlp::LSR1Model) = nlp.op.nprod

and it behaves according to our discussion.
I made a slight distinction in the tests, because an LSR1Model performs a product during the push! method, which is not the case for an LBFGSModel.
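
A sketch of that distinction in test form, assuming push! on the model forwards the pair (s, y) to its quasi-Newton operator as described above; the objective and vectors are arbitrary.

using ADNLPModels, NLPModels, NLPModelsModifiers, Test

nlp = ADNLPModel(x -> sum(x .^ 2), ones(2))
s, y = [1.0, 0.0], [0.5, 0.0]

lbfgs = LBFGSModel(nlp)
push!(lbfgs, s, y)
@test neval_hprod(lbfgs) == 0   # the L-BFGS update performs no operator-vector product

lsr1 = LSR1Model(nlp)
push!(lsr1, s, y)
@test neval_hprod(lsr1) == 1    # the SR1 update performs one product during push!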

@github-actions
Contributor

Breakage test results for the same downstream packages as in the previous report (latest / stable status badges not rendered).

Member

@tmigot tmigot left a comment

I feared this was going to be a large PR, so good job keeping it concise. Looks good to me, thanks!

@paraynaud paraynaud merged commit 7f01c57 into JuliaSmoothOptimizers:main Jan 10, 2023
@paraynaud paraynaud deleted the pr-count-neval_hprod branch January 10, 2023 13:46
@tmigot tmigot linked an issue Jan 10, 2023 that may be closed by this pull request

Development

Successfully merging this pull request may close these issues.

No incrementation of neval_hprod
