Automatically force full matrix when using AD with global indexing #16395

Closed
lindsayad opened this issue Dec 5, 2020 · 0 comments · Fixed by #16397
lindsayad commented Dec 5, 2020

Reason

AD with global indexing adds to the Jacobian for any non-zero dependence. It would be difficult to prevent additions to the matrix when the user has not specified a coupling between variables that nonetheless exists in practice; we would have to query two maps, one from the i index to the variable number and one from the j index to the variable number (see the sketch below). That per-entry lookup would introduce undesirable expense in most cases. Consequently, we will allow AD with global indexing to add any non-zero coupling that exists. With that in mind we want to assume, by default, a full coupling sparsity pattern whenever we have AD objects and we are using global indexing. Assuming a full coupling sparsity pattern prevents new nonzero allocations in PETSc, which can greatly slow down simulations.
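A minimal sketch (not MOOSE code; all names here are hypothetical) of the filtering that global-indexing AD would need in order to respect a user-declared coupling matrix. Both the row and column global indices would have to be mapped back to variable numbers before the user's coupling could be consulted, and that lookup would run once per Jacobian entry on every element:

```cpp
// Hypothetical illustration of the "two map queries" described above.
#include <vector>

struct UserCoupling
{
  // coupling[ivar][jvar] == true means the user declared ivar-jvar coupling
  std::vector<std::vector<bool>> coupling;
};

// dof_to_var: a (hypothetical) map from a global dof index to its variable
// number. Both i and j must go through it before the coupling can be checked.
bool
shouldInsert(const std::vector<unsigned int> & dof_to_var,
             const UserCoupling & cm,
             unsigned int i, // global row index of the AD-computed derivative
             unsigned int j) // global column index of the AD-computed derivative
{
  const auto ivar = dof_to_var[i];
  const auto jvar = dof_to_var[j];
  return cm.coupling[ivar][jvar];
}
```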

Design

Automatic triggering of full coupling has to happen early in the simulation, ideally before EquationSystems::init; otherwise we would have to reinit the EquationSystems later, which is typically a fairly expensive operation. Unfortunately, AD objects are typically not added until well after EquationSystems::init. To circumvent this, I will leverage our RelationshipManager/GhostingFunctor system, since GhostingFunctors are designed to be used as coupling functors for determining the sparsity pattern/coupling. A libMesh-level sketch of the underlying mechanism is below.
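A minimal libMesh-level sketch of the idea: attach a fully-coupled CouplingMatrix to the DofMap before EquationSystems::init so the sparsity pattern is preallocated for every variable pair. The actual change is expected to do this through a RelationshipManager registered when AD objects with global indexing are present; the default_coupling()/set_dof_coupling() calls below are the libMesh hooks as I understand them, shown for illustration only rather than as the final implementation.

```cpp
#include "libmesh/equation_systems.h"
#include "libmesh/linear_implicit_system.h"
#include "libmesh/dof_map.h"
#include "libmesh/default_coupling.h"
#include "libmesh/coupling_matrix.h"

using namespace libMesh;

// full_cm must outlive the DofMap's use of it (the DofMap only stores a pointer)
void
forceFullCoupling(EquationSystems & es, CouplingMatrix & full_cm)
{
  auto & system = es.get_system<LinearImplicitSystem>(0);
  DofMap & dof_map = system.get_dof_map();

  const unsigned int n_vars = system.n_vars();
  full_cm.resize(n_vars);

  // Mark every variable pair as coupled so PETSc preallocates the full block
  for (unsigned int i = 0; i < n_vars; ++i)
    for (unsigned int j = 0; j < n_vars; ++j)
      full_cm(i, j) = true;

  // Hand the full coupling to the default coupling functor; this must happen
  // before EquationSystems::init() or the matrix has to be re-preallocated
  dof_map.default_coupling().set_dof_coupling(&full_cm);
}
```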

Additionally, I think it makes sense to allow the user to turn off the full matrix if they trust the sparsity pattern they're providing in their input file. This could be very important when there are a lot of variables and the coupling between them is sparse. In that case, preallocating a full sparsity pattern would cause an unnecessary initial memory spike (although later assembly of the matrix would shrink out the unnecessary memory). An example of the kind of input-file coupling specification this would respect is shown below.
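For reference, this is the existing input-file mechanism the paragraph refers to: with an SMP preconditioner the user can declare only the variable pairs they know are coupled instead of `full = true`. The variable names `u` and `v` are placeholders, and the proposed opt-out parameter for the automatic full matrix is deliberately not named here since it is not yet decided.

```
[Preconditioning]
  [./smp]
    type = SMP
    off_diag_row    = 'u'
    off_diag_column = 'v'
  [../]
[]
```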

Impact

In NavierStokes, for example, we have erroring on new nonzero allocations turned on by default. This resulted in @smharper encountering a PETSc message that he didn't understand (see #15644 (comment) and #15644 (comment)); almost all of our users would be equally if not more confused. We want to avoid that. So, just as we did in #13411, we'll automatically construct a full matrix (by default) behind the scenes when using AD with global indexing, whether we are doing NEWTON or PJFNK.
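For context on the PETSc error mentioned above: when this option is set on the matrix (as the NavierStokes module does by default), any insertion into a location that was not preallocated aborts with a "new nonzero caused a malloc"-style error, so preallocating the full coupling up front means AD's extra entries never trigger it. A minimal sketch using the real PETSc call; error-code checking is omitted for brevity.

```cpp
#include <petscmat.h>

void
errorOnNewNonzeros(Mat jacobian)
{
  // Turn unplanned nonzero allocations into hard errors at assembly time
  MatSetOption(jacobian, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_TRUE);
}
```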
