What is a Nonlinear Solver? And How to easily build Newer Ones #345
Conversation
(force-pushed from 7bf098a to 3f58736)
Codecov Report

Attention: additional details and impacted files

@@            Coverage Diff             @@
##           master     #345      +/-   ##
==========================================
+ Coverage   83.79%   86.23%   +2.44%
==========================================
  Files          28       44      +16
  Lines        2179     2609     +430
==========================================
+ Hits         1826     2250     +424
- Misses        353      359       +6
I may have a special case here to consider and will write it here instead of as a separate issue, due to the title of the PR. If I should transfer this information to an issue instead, I am happy to do so. When discretizing PDEs we often end up with a nonlinear system of equations plus constraints (coming from e.g. Dirichlet boundary conditions). In Newton methods we also know how to enforce the constraints efficiently: we can rewrite the linear equation system. In Ferrite.jl a typical Newton-Raphson implementation looks like this:

dh = ... # Holds info about finite element problem
un = zeros(ndofs(dh))
u = zeros(ndofs(dh))
Δu = zeros(ndofs(dh))
... # define remaining variables
dbcs = ... # Holds Dirichlet boundary condition information
apply!(un, dbcs) # Apply Dirichlet conditions to solution vector
while true; newton_itr += 1
    # Construct the current guess
    u .= un .+ Δu
    # Compute residual and tangent for current guess
    assemble_global!(K, g, u, ...)
    # Apply boundary conditions
    apply_zero!(K, g, dbcs)
    # Compute the residual norm and compare with tolerance
    normg = norm(g)
    if normg < NEWTON_TOL
        break
    elseif newton_itr > NEWTON_MAXITER
        error("Reached maximum Newton iterations, aborting")
    end
    # Compute increment
    ΔΔu = K \ g
    apply_zero!(ΔΔu, dbcs)
    Δu .-= ΔΔu
end

(See https://ferrite-fem.github.io/Ferrite.jl/dev/tutorials/hyperelasticity/ for the full example.) I am confident that other frameworks come with similar mechanisms. Basically, the difference to a "normal" Newton-Raphson is the apply_zero! handling of the constrained degrees of freedom. With this in mind I would like to ask for considering this, and possibly for recommendations on how this integration can be accomplished from the user side when using NonlinearSolve.jl. I am happy to help here and also to answer any questions which might come up. My first idea was to provide a custom linear solver: this custom solver just wraps the actual linear solver and adds the apply_zero! calls around the actual solve.
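For concreteness, here is a minimal sketch of that wrapping idea. The ConstrainedLinSolve name and the callable-struct pattern are illustrative only, not an existing NonlinearSolve.jl or Ferrite.jl API; apply_zero! is Ferrite's constraint handling from the loop above.

```julia
# Sketch only: wraps whatever actually solves K * ΔΔu = g and injects Ferrite's
# constraint handling around it. `ConstrainedLinSolve` is a hypothetical name.
using Ferrite

struct ConstrainedLinSolve{S, C}
    inner::S   # the actual linear solver, e.g. (K, g) -> K \ g
    dbcs::C    # Ferrite ConstraintHandler holding the Dirichlet conditions
end

function (ls::ConstrainedLinSolve)(K, g)
    apply_zero!(K, g, ls.dbcs)   # condense constrained rows/columns before the solve
    ΔΔu = ls.inner(K, g)
    apply_zero!(ΔΔu, ls.dbcs)    # zero the increment at the constrained dofs afterwards
    return ΔΔu
end

# In the loop above, `apply_zero!(K, g, dbcs); ΔΔu = K \ g; apply_zero!(ΔΔu, dbcs)`
# would collapse to: ΔΔu = ConstrainedLinSolve(\, dbcs)(K, g)
```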
One way might be to provide a custom linear solver function. The only concern here is that with custom functions, the inputs to the custom linear solver might not be what the user expects. @ChrisRackauckas do you have particular thoughts on this?
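If the custom-function route is taken, a hedged sketch of what it could look like, assuming LinearSolve.jl's documented LinearSolveFunction hook and NonlinearSolve.jl's linsolve keyword; whether the assembled K and g reach this function unchanged is exactly the concern raised above.

```julia
# Hedged sketch: the signature follows LinearSolve.jl's custom-solver documentation.
using LinearSolve, NonlinearSolve, Ferrite

function constrained_linsolve(dbcs)   # close over the Ferrite ConstraintHandler
    return function (A, b, u, p, newA, Pl, Pr, solverdata; kwargs...)
        apply_zero!(A, b, dbcs)   # enforce the Dirichlet constraints on the linear system
        u = A \ b
        apply_zero!(u, dbcs)      # zero the increment at the constrained dofs
        return u
    end
end

# `dbcs` is the ConstraintHandler from the Newton loop above.
alg = NewtonRaphson(; linsolve = LinearSolveFunction(constrained_linsolve(dbcs)))
```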
(force-pushed from 1bf9805 to edd6dae)
function SciMLBase.reinit!(cache::LeastSquaresOptimJLCache, args...; kwargs...)
    error("Reinitialization not supported for LeastSquaresOptimJL.")
end
why only this one but not the other extensions?
This is the only one that creates the cache. All others directly use __solve.
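A rough illustration of the distinction (hypothetical names, not the package's internals): only the cache-creating pattern has state to reset, which is why this extension alone needs the reinit! stub.

```julia
# Hypothetical sketch of the two extension patterns mentioned above.
struct DirectWrapper end                     # forwards each call straight to the wrapped solver
solve_once(::DirectWrapper, prob) = prob     # stand-in for a stateless one-shot solve

mutable struct CachingWrapper{P}             # keeps solver state between solves
    prob::P
    iterations::Int
end
init_wrapper(prob) = CachingWrapper(prob, 0)

function reinit_wrapper!(cache::CachingWrapper, prob)
    cache.prob = prob                        # reset the stored problem ...
    cache.iterations = 0                     # ... and the accumulated state
    return cache
end
```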
kwargs...) where {F, IN} --> AbstractNonlinearSolveLineSearchCache
```
"""
abstract type AbstractNonlinearSolveLineSearchAlgorithm end
Wasn't this moved to LineSearch.jl?
Not yet, but that is the eventual plan. We still need to figure out how to decouple line searches like RobustNonMonotoneLineSearch, which tie heavily into the cache.
if is_extension_loaded(Val(:Symbolics))
    return SymbolicsSparsityDetection()
else
    return ApproximateJacobianSparsity()
As a default?
Symbolics is actually pretty good after a size threshold, and even faster than the approximate version.
I mean defaulting to approximate Jacobian sparsity seems a bit odd. Even if it's not AD?
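For reference, a usage sketch of the two detectors being compared, assuming SparseDiffTools.jl's high-level sparse_jacobian API as of this PR; the function and problem size are made up for illustration.

```julia
# Sketch: exact (symbolic) vs. approximate sparsity detection via SparseDiffTools.jl.
using SparseDiffTools, Symbolics

f!(y, x) = (y .= x .^ 2 .+ circshift(x, 1); nothing)   # simple banded-ish pattern

x = rand(50)
y = similar(x)

# Exact pattern via symbolic tracing (requires Symbolics to be loaded):
J_exact = sparse_jacobian(AutoSparseFiniteDiff(), SymbolicsSparsityDetection(), f!, y, x)

# Approximate pattern from a few randomized evaluations (no Symbolics needed):
J_approx = sparse_jacobian(AutoSparseFiniteDiff(), ApproximateJacobianSparsity(), f!, y, x)
```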
A bunch of small comments and issues to open; otherwise good to go!
The DAE failure is real: https://github.com/SciML/NonlinearSolve.jl/actions/runs/7543078285/job/20533293590?pr=345#step:6:1759. The cache is initialized with a Vector but later it gets a SubArray.
(force-pushed from 03bc7d4 to 80c8ced)
Looks like my comments were addressed. Let's get some issues open on nonlinear preconditioning, Klement in the polyalg, linesearch removal, and non-scalar tolerances.
We should just make the cache via a view too.
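A small, hypothetical illustration of that failure mode and the view-based fix (TinyCache is not the actual NonlinearSolve.jl cache):

```julia
# Hypothetical sketch: a cache whose field type is locked to the array it was built with.
mutable struct TinyCache{V <: AbstractVector}
    u::V
end

step!(cache::TinyCache{V}, unew::V) where {V} = (cache.u = unew; cache)

u0 = zeros(4)
cache = TinyCache(copy(u0))        # field type locked to Vector{Float64}
# step!(cache, view(u0, 2:3))      # MethodError: a SubArray no longer matches V

# The suggested fix: build the cache from a view up front, so later SubArrays match.
cache_view = TinyCache(view(u0, 1:4))
step!(cache_view, view(u0, 2:3))   # ok, both arguments are SubArrays of the same type
```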
Make an issue in OrdinaryDiffEq with the DAE one; I can take a look at that later today.
Goals
- Dispatch system with safety to disable NLLS with NewtonRaphson (and similarly for others). Currently it will work, which is not the worst thing, but disabling this is a safer option IMO.
- reinit interface

How close is this to being ready?
Once tests pass and downstream PRs are merged we should be good to go. I will follow this PR up with:
- recompute_jacobian, to recompute the Jacobian only when needed
- Documentation

Target Issues
- Modular Internals