
Conversation

@odow
Member

@odow odow commented Apr 18, 2022

ANN: upcoming refactoring of JuMP's nonlinear API

tl;dr: The next feature release of JuMP (probably v1.2.0) introduces a large
refactoring of JuMP's nonlinear API. If you have code that accessed private
features of the nonlinear API, such as JuMP._Derivatives or model.nlp_data,
your code will break. If you used only the public, documented API, such as
register, @NLconstraint, and num_nonlinear_constraints, this change does
not affect you. To try the upcoming release, use
import Pkg; Pkg.pkg"add JuMP#od/moi-nonlinear".

The relevant pull request is #2955

What are we doing?

Over the last few months, we have been refactoring how JuMP supports nonlinear
programs. This involved moving and re-organizing a large amount of code from
JuMP into MathOptInterface.

The result is the new MOI.Nonlinear submodule in MathOptInterface, with a
documented public API for creating and dealing with nonlinear programs. Read
more about it here: https://jump.dev/MathOptInterface.jl/stable/submodules/Nonlinear/overview/
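For a flavor of the new module, here is a minimal sketch following the patterns in the MOI.Nonlinear documentation (the variable index is arbitrary and chosen for illustration; check exact signatures against the docs):

```julia
import MathOptInterface as MOI

# An arbitrary variable index, standing in for one obtained from an optimizer.
x = MOI.VariableIndex(1)

# Build a standalone nonlinear model: an objective plus one constraint.
nlp = MOI.Nonlinear.Model()
MOI.Nonlinear.set_objective(nlp, :(sin($x)^2))
MOI.Nonlinear.add_constraint(nlp, :($x^2), MOI.LessThan(1.0))
```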

However, as part of this work we are removing code from JuMP. This code was
internal, undocumented, and not intended for public use. Most of it was
contained in the JuMP._Derivatives submodule, but we also made changes such
as removing model.nlp_data.

Why did we do this?

The nonlinear code in JuMP was a clear example of technical debt. It was
complicated, convoluted, and largely undocumented. People wanting to extend JuMP
for nonlinear programs were forced to use a range of hacks that relied on
undocumented internals.

The new MOI.Nonlinear submodule offers a stable, documented, and public API
for people to build JuMP extensions on. It also enables new features like
swappable automatic differentiation backends, and Hessians of user-defined
functions.
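As a hedged sketch of those two features, based on the MOI.Nonlinear documentation (treat the exact signatures as an assumption to verify against the docs):

```julia
import MathOptInterface as MOI

# A user-defined function with hand-coded gradient and Hessian.
f(x) = x^2
∇f(x) = 2x
∇²f(x) = 2.0

x = MOI.VariableIndex(1)
nlp = MOI.Nonlinear.Model()

# Registering f together with its derivatives enables exact Hessians
# of user-defined functions.
MOI.Nonlinear.register_operator(nlp, :my_f, 1, f, ∇f, ∇²f)
MOI.Nonlinear.set_objective(nlp, :(my_f($x)))

# The AD backend is an explicit argument to the evaluator, so it can be
# swapped for an alternative implementation.
evaluator = MOI.Nonlinear.Evaluator(nlp, MOI.Nonlinear.SparseReverseMode(), [x])
```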

We originally considered that any change to the nonlinear API would require a
breaking v2.0.0 release of JuMP, occurring at least two years after the release
of JuMP 1.0. However, we implemented the changes more quickly than expected, and
were able to do so in a way that does not break the public nonlinear API.
Therefore, we elected to classify this as a non-breaking feature release.

Does it affect me?

If you have any code that called private features of the JuMP nonlinear API,
such as JuMP._Derivatives or model.nlp_data, your code will break.

If you used only the public, documented API such as register, @NLconstraint
and num_nonlinear_constraints, this does not affect you.
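For reference, a minimal example that uses only that public, documented API, and therefore keeps working unchanged across the release:

```julia
using JuMP

# A user-defined function registered through the public API.
square(x) = x^2

model = Model()
@variable(model, x)
register(model, :square, 1, square; autodiff = true)
@NLconstraint(model, square(x) <= 1)
num_nonlinear_constraints(model)  # returns 1
```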

What are the next steps?

Try the upcoming release as follows:
import Pkg; Pkg.pkg"add JuMP#od/moi-nonlinear".

If you find any bugs or changes in performance, please post below, or open a
GitHub issue. Once we're happy that there are no issues with the changes, we
will release a new version of JuMP with the changes.

@odow odow added the Category: Nonlinear Related to nonlinear programming label Apr 18, 2022
@codecov

codecov bot commented Apr 18, 2022

Codecov Report

Merging #2955 (cf7d5d8) into master (124ba75) will increase coverage by 0.67%.
The diff coverage is 97.28%.

@@            Coverage Diff             @@
##           master    #2955      +/-   ##
==========================================
+ Coverage   95.44%   96.12%   +0.67%     
==========================================
  Files          43       32      -11     
  Lines        5824     4130    -1694     
==========================================
- Hits         5559     3970    -1589     
+ Misses        265      160     -105     
Impacted Files Coverage Δ
src/operators.jl 96.13% <0.00%> (ø)
src/precompile.jl 0.00% <ø> (ø)
src/macros.jl 96.28% <94.17%> (-0.14%) ⬇️
src/nlp.jl 98.80% <98.70%> (+4.21%) ⬆️
src/JuMP.jl 95.66% <100.00%> (ø)
src/copy.jl 95.52% <100.00%> (+0.06%) ⬆️
src/feasibility_checker.jl 100.00% <100.00%> (ø)
src/objective.jl 96.07% <100.00%> (+1.96%) ⬆️
src/optimizer_interface.jl 82.00% <100.00%> (ø)
src/print.jl 98.63% <100.00%> (+0.09%) ⬆️
... and 2 more


Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@odow odow force-pushed the od/moi-nonlinear branch 2 times, most recently from f4a9fe9 to b12f22e Compare April 22, 2022 02:51
@odow odow force-pushed the od/moi-nonlinear branch from 5a3c234 to 185f983 Compare May 3, 2022 03:03
@odow odow force-pushed the od/moi-nonlinear branch from f2d3c02 to f9e9ff9 Compare May 29, 2022 23:50
@odow odow changed the title WIP: refactor to use MOI.Nonlinear Refactor to use MOI.Nonlinear May 30, 2022
@odow
Member Author

odow commented Jun 9, 2022

What are the concrete next steps for merging this into JuMP? It's a bit of a weird PR, because on one hand, it changes a lot of code, but on the other hand, very little actually changes at the user-facing level (just a couple of errors move from compile time to run time, and some constants change from 0 to 0.0).

I think at minimum, we need the following reviews:

Optional

Contributor

@frapac frapac left a comment


This PR is a net improvement imo. The code is a lot cleaner now that the nonlinear backend has been factored out on the MOI side. I have tested this branch on my custom nonlinear models, and JuMP works like a charm on them, returning the same results as before.

I only have a few minor comments (I don't have the expertise to assess the correctness of the macro in JuMP).

Also, one remaining TODO on the JuMP side was the support of incremental solves for nonlinear model:
https://github.com/jump-dev/JuMP.jl/blob/od/moi-nonlinear/src/optimizer_interface.jl#L156

I am wondering how difficult it would be to support incremental nonlinear solves with the new nonlinear interface, e.g., adding new nonlinear constraints to the MOI.Nonlinear.Model between two consecutive solves?

Member

@mlubin mlubin left a comment


Have you run end-to-end benchmarks? Maybe this was already done, but I remember at some point there were performance TODOs in MOI.Nonlinear.

@odow
Member Author

odow commented Jun 9, 2022

Also, one remaining TODO on the JuMP side was the support of incremental solves for nonlinear model:

That's my plan once this is merged, but it should be a separate PR.

@odow
Member Author

odow commented Jun 9, 2022

I remember at some point there were performance TODOs in MOI.Nonlinear.

I reverted the performance TODOs in MOI.Nonlinear, so now the only change is moving some of the expression generation from compile time in the macros to runtime. I can make some more benchmarks.

@ccoffrin
Copy link
Contributor

ccoffrin commented Jun 10, 2022

I reviewed the code changes and nothing stood out to me as problematic. But I'll also confess that the bulk of the details are beyond my level of expertise.

I would suggest that any feature additions that run the risk of prompting possible future breaking changes be discussed at the JuMP developer call for concurrence across the wider group (if this has not already been done).

The only two points that I saw that would fall into this category were:

  • API for setting AD backends
  • API for setting user-defined Hessians

One option to keep feature progress without breaking changes would be to make these new features accessible only via internal functions for a time, then move them to exported functions after advanced users have had a chance to use them for several months?

odow added 11 commits June 13, 2022 11:08
Updates

More updates with beginnings of auto-registration

Add back printing

Simplify parsing macro

Re-add support for generators

Support . field access

More updates

More updates

More fixes

Updates to the docs

More fixes for the docs

Documentation fix

More fixes

Lots more fixes

Fix format

More fixes

Fix formatting and re-org

Coloring tests

More tests

Fix tests

More test fixes

More fixes

More printing fixes

F

Implement value

Fix test failure

Fix broken tests

Revert change to nonlinear constraint expressions

Update

Fix tests

Another fix

Fix test

Another fix

Start on documentation

Reenable precompile

Various updates and improvements

Add more docs

Fix docs

Add docstrings

Variety of docs and other improvements

Fix typo

More updates

Fix formatting

More updates

Remove utils.jl

Tidy Coloring

Fix typo

Improve test coverage

Remove unneeded type

Union is called in docs

Add back NaNMath
Misc fixes

Fix formatting

More tests

Fix test

Fix formatting

Update for latest changes

Fix typo

Fix formatting

Fix typo

Latest updates

Updates from latest MOI changes

Update for latest changes

Fix tests

More fixes
@odow odow force-pushed the od/moi-nonlinear branch from 516f8e3 to a63c9e8 Compare June 12, 2022 23:12
@odow
Member Author

odow commented Jun 20, 2022

I've opened issues in the packages that will break. I found them by searching JuliaHub for things like _Derivatives, .nlconstr, and .nlexpr.

A few, like NLPModelsJuMP, are easy fixes to use only the public API. But others like EAGO are pretty much dead in the water because they heavily used a lot of the stuff in _Derivatives and other _private methods.

Some of the others are just for printing (MPSGE), which I guess going without isn't too much of a penalty. Others like Complementarity and DisjunctiveProgramming (ab)use the API to do things like delete nonlinear constraints and modify nonlinear expressions, so I don't know if we have a safe work-around.

The main takeaway, which I will push a commit for shortly, is to overload model.nlp_data with a nice error message, so that anyone using a package which accesses the internal API gets something informative (like telling them to install JuMP v1.1).
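A hypothetical sketch of what such an overload could look like (this is not the committed code, and the message text is illustrative only):

```julia
# Inside the JuMP module: intercept the removed field and point the user
# at a version of JuMP that still has it.
function Base.getproperty(model::Model, name::Symbol)
    if name === :nlp_data
        error(
            "The internal field `model.nlp_data` was removed from JuMP. " *
            "If a package you rely on accesses it, that package needs " *
            "updating; as a stop-gap, pin JuMP to v1.1.",
        )
    end
    return getfield(model, name)
end
```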

@odow
Member Author

odow commented Jun 21, 2022

I took a look at some of the packages:

  • Plasmo and EAGO are hard because they touch a lot of things. @jalving is taking a look at Plasmo.

  • Complementarity: The "simple" fix is just deleting nonlinear constraints, but a deeper look suggests we could do better by fixing the root cause. It does a number of questionable things, like calling MOI.initialize every time within the callback, mucking with macros, and adding new constraints.

  • MPSGE: depends on Complementarity. Hacks at Complementarity's expression system to print things nicely: https://github.com/anthofflab/MPSGE.jl/blob/83419a7060218b0255e7f8f8d372d407ba306c6b/src/algebraic_wrapper.jl#L96. A nicer fix of Complementarity could simplify this

  • NLPModelsJuMP: need constraint lower/upper bounds. Would be easy if you could call index.(all_nonlinear_constraints(model)) and then nonlinear_model(model)[c] because this would return the expression/set form of the constraint. But they also do this highly questionable thing of copying the registered functions and they touch .user_output_buffer for some reason: https://github.com/JuliaSmoothOptimizers/NLPModelsJuMP.jl/blob/2cbf7717c29ee7fc34d1f5a83f017d2cc6cb6058/src/utils.jl#L387-L409. Probably worth taking a much larger look at how we could turn a JuMP model into an NLPModel, because it shouldn't require that sort of thing.

  • DisjunctiveProgramming: modifies the expression of nonlinear constraints. Exposing access to the documented expression graph in MOI.Nonlinear is probably enough, but it might always be a little caveat utilitor.

  • BilevelJuMP is just some plumbing stuff of new MOI.Nonlinear.parse_expression overloads.

Action items:

  • Implement nonlinear_model to avoid people writing model.nlp_model.
  • Implement index to return indices in MOI.Nonlinear given a nonlinear expression/constraint/parameter index from JuMP
  • Add delete(::Model, ::NonlinearConstraintRef). Might be best as a separate PR.

julia> index(cons1)
NonlinearConstraintIndex(1)
MathOptInterface.Nonlinear.ConstraintIndex(1)
Member Author


This part has changed, but I've added const NonlinearConstraintIndex = MOI.Nonlinear.ConstraintIndex so it isn't breaking.

@davidanthoff
Contributor

Cross-posting a message I just left on Discourse:

I understand that technically making this a minor release is correct, but on the other hand my sense is that there are packages out there that depend on JuMP and rely on internals that will break if this gets released as 1.2, right? Complementarity.jl is one that comes to mind; not sure whether there are others as well?

In my mind, in such a situation it would be more helpful for users to just make this a 2.0 release. If I’m a user of a package that depends on JuMP internals, I a) might not even be aware of that, and b) can’t really do anything about it. In the end the user experience will just be that things don’t work, and I think it would be nicer to try to prevent that.

Plus, I’m not aware how a depending package like Complementarity.jl could even signal that it doesn’t work with JuMP 1.2, but it does work with 1.1? Maybe I’m missing something there…

Would there be any real downside of just making this a 2.0?

And also: awesome work, very excited to try it!

@davidanthoff
Contributor

Also, another question: I remember that there was discussion to move the functionality of Complementarity into JuMP eventually, and I think the idea was to first revamp the NLP stuff and then do that. Just wondering whether the current redesign is such that it would facilitate such a step?

@matbesancon
Contributor

I understand that technically making this a minor release is correct, but on the other hand my sense is that there are packages out there that depend on JuMP and that rely on internals that will break if this gets released as 1.2, right?

This is the same as for Julia itself: relying on internals and unexported modules makes it possible that future versions will break one's code.

Plus, I’m not aware how a depending package like Complementarity.jl could even signal that it doesn’t work with JuMP 1.2, but it does work with 1.1?

Yes, Pkg supports that with tilde or equality specifiers: https://pkgdocs.julialang.org/v1/compatibility/#Tilde-specifiers
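Concretely, a dependent package's Project.toml could use a tilde specifier to allow JuMP 1.1.x but exclude 1.2 (an illustrative fragment, not taken from any particular package):

```toml
[compat]
# "~1.1" permits versions >= 1.1.0 and < 1.2.0
JuMP = "~1.1"
```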

In my mind, in such a situation it would be more helpful for users to just make this a 2.0 release. If I’m a user of a package that depends on JuMP internals, I a) might not even be aware of that, and b) can’t really do anything about it. In the end the user experience will just be that things don’t work, and I think it would be nicer to try to prevent that.

This is a very valid point; that's why @odow opened PRs, or at least issues, on all packages that depend on the nonlinear part and will break with the upgrade, similar to what Julia does when something is "theoretically" not breaking but practically is, because of reliance on internals or unspecified behavior.

@odow
Member Author

odow commented Jun 23, 2022

Resolution from today's monthly call is that before merging, we should:

  • Wait at least a week for more feedback
  • Update affected packages' compat bounds and retroactively cap bounds in the General registry
  • Add a warning, with an example, to the extensions docs on how to set compat bounds if you use internal APIs

@odow
Member Author

odow commented Jun 23, 2022

I remember that there was discussion to move the functionality of Complementarity into JuMP eventually, and I think the idea was to first revamp the NLP stuff and then do that. Just wondering whether the current redesign is such that it would facilitate such a step?

No progress on this yet. The current change is very much about maintenance, documentation, and paying down technical debt; there's no real new functionality. Nonlinear complementarity is something I'm thinking about, though.

@odow
Member Author

odow commented Jun 24, 2022

Opened PRs to fix compat bounds. EAGO, Complementarity, and MPSGE haven't released tagged versions that are compatible with JuMP 1.0 yet.

@odow
Member Author

odow commented Jun 30, 2022

Hmm. @abelsiqueira found an unintentional breaking change we obviously don't have any tests for: https://github.com/JuliaSmoothOptimizers/NLPModelsJuMP.jl/pull/118/files#r911475220

julia> using JuMP

julia> model = Model()
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: NO_OPTIMIZER
Solver name: No optimizer attached.

julia> @variable(model, x)
x

julia> @NLconstraint(model, sin(x) ≤ 1)
ERROR: Unrecognized function "≤" used in nonlinear expression.

You must register it as a user-defined function before building
the model. For example, replacing `N` with the appropriate number
of arguments, do:
```julia
model = Model()
register(model, :≤, N, ≤, autodiff=true)
# ... variables and constraints ...
```

Stacktrace:
 [1] error(s::String)
   @ Base ./error.jl:33
 [2] assert_registered(registry::MathOptInterface.Nonlinear.OperatorRegistry, op::Symbol, nargs::Int64)
   @ MathOptInterface.Nonlinear ~/.julia/packages/MathOptInterface/kCmJV/src/Nonlinear/operators.jl:416
 [3] macro expansion
   @ ~/.julia/dev/JuMP/src/macros.jl:1874 [inlined]
 [4] top-level scope
   @ REPL[4]:1

This used to work:

julia> using JuMP
[ Info: Precompiling JuMP [4076af6c-e467-56ae-b986-b466b2749572]

julia> model = Model(); @variable(model, x)
x

julia> @NLconstraint(model, sin(x) ≤ 1)
sin(x) - 1.0 ≤ 0

Fix and tests incoming.

@odow
Member Author

odow commented Jul 4, 2022

Merging this now, but I'll wait a while longer before tagging v1.2.0 so we can have a go at updating some of the extensions.

@odow odow merged commit 046d1dc into master Jul 4, 2022
@odow odow deleted the od/moi-nonlinear branch July 4, 2022 22:52