Using TrilinosWrappers::SolverCG in inverse_operator() #12754

Closed
lwy5515 opened this issue Sep 11, 2021 · 9 comments · Fixed by #12755

lwy5515 (Contributor) commented Sep 11, 2021

Hi,

I am trying to solve the linear system in step-60, but using distributed triangulations. Following the routine solve() in step-60, I define an inverse operator for the stiffness matrix using the CG solver from TrilinosWrappers::SolverCG. The code compiles successfully, but a deal.II exception is thrown at the evaluation of the inverse operator, with the following additional information:

Uninitialized TrilinosPayload::inv_vmult called (Matrix constructor with matrix exemplar)
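For context, this is roughly the pattern that triggers the exception. It is only a sketch, not my actual step-60 code: stiffness_matrix, preconditioner, rhs, solution and the solver tolerance are placeholder names for the distributed objects in my program.

// Sketch only; all names and the tolerance below are placeholders.
SolverControl              solver_control(1000, 1e-12);
TrilinosWrappers::SolverCG solver(solver_control);

auto K       = linear_operator<LA::MPI::Vector>(stiffness_matrix);
auto preconK = linear_operator(K, preconditioner);
auto K_inv   = inverse_operator(K, solver, preconK);

solution = K_inv * rhs; // the exception is thrown here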

The above can be reproduced by modifying solve() in step-40 when using Trilinos: replace the following

solver.solve(system_matrix,
             completely_distributed_solution,
             system_rhs,
             preconditioner);

with

auto K       = linear_operator<LA::MPI::Vector>(system_matrix);
auto preconK = linear_operator(K, preconditioner);
auto K_inv   = inverse_operator(K, solver, preconK);

completely_distributed_solution = K_inv * system_rhs;

and add the headers

#include <deal.II/lac/linear_operator.h>
#include <deal.II/lac/linear_operator_tools.h>

Then I get the following error message:

Running with Trilinos on 1 MPI rank(s)...
Cycle 0:
   Number of active cells:       1024
   Number of degrees of freedom: 4225

--------------------------------------------------------
An error occurred in line <2458> of file </var/folders/8z/hlb6vc015qjggytkxn84m6_c0000gn/T/heltai/spack-stage/spack-stage-dealii-9.3.0-zy7k3uwnakcqjvrajvacy5l4jrl7eaex/spack-src/source/lac/trilinos_sparse_matrix.cc> in function
    auto dealii::TrilinosWrappers::internal::LinearOperatorImplementation::TrilinosPayload::TrilinosPayload(const TrilinosWrappers::SparseMatrix &, const TrilinosWrappers::SparseMatrix &)::(anonymous class)::operator()(dealii::TrilinosWrappers::internal::LinearOperatorImplementation::TrilinosPayload::Domain &, const dealii::TrilinosWrappers::internal::LinearOperatorImplementation::TrilinosPayload::Range &) const
The violated condition was: 
    false
Additional information: 
    Uninitialized TrilinosPayload::inv_vmult called (Matrix constructor
    with matrix exemplar)

Stacktrace:
-----------
#0  libdeal_II.g.9.3.0.dylib  0x000000010d30c91f  _ZZN6dealii16TrilinosWrappers8internal28LinearOperatorImplementation15TrilinosPayloadC1ERKNS0_12SparseMatrixES6_ENK3$_6clER18Epetra_MultiVectorRKS8_ + 95
#1  libdeal_II.g.9.3.0.dylib  0x000000010d30c8b9  _ZNSt3__1L8__invokeIRZN6dealii16TrilinosWrappers8internal28LinearOperatorImplementation15TrilinosPayloadC1ERKNS2_12SparseMatrixES8_E3$_6JR18Epetra_MultiVectorRKSB_EEEDTclclsr3std3__1E7forwardIT_Efp_Espclsr3std3__1E7forwardIT0_Efp0_EEEOSF_DpOSG_ + 9

The above issue can be avoided by using the native CG solver, i.e. dealii::SolverCG<LA::MPI::Vector>.
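For reference, a sketch of the working variant with the native solver, reusing the names from the step-40 reproduction above (solver_control is the SolverControl already present in step-40; the include of deal.II/lac/solver_cg.h is assumed):

// Workaround sketch: deal.II's own CG instead of TrilinosWrappers::SolverCG.
dealii::SolverCG<LA::MPI::Vector> cg(solver_control);

auto K       = linear_operator<LA::MPI::Vector>(system_matrix);
auto preconK = linear_operator(K, preconditioner);
auto K_inv   = inverse_operator(K, cg, preconK);

completely_distributed_solution = K_inv * system_rhs;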

Is this a bug, or did I do something incorrectly?

FYI @luca-heltai

jppelteret (Member) commented:

I think that this is an error with your code (due to our current lack of documentation), and that you need to change auto preconK = linear_operator(K, preconditioner); to auto preconK = linear_operator<LA::MPI::Vector>(system_matrix, preconditioner); or maybe just put the preconditioner directly into the inverse_operator. I'll see if I can dig up a concrete example of the former approach in the test suite, but you can see #12466 for comments that I made that will be transcribed into the documentation.
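A minimal sketch of the two suggested variants, reusing the names from the step-40 reproduction above (the exact types, e.g. LA::MPI::Vector, are assumed from that snippet):

auto K = linear_operator<LA::MPI::Vector>(system_matrix);

// Variant 1: build the preconditioner operator from the matrix exemplar
// rather than from the LinearOperator K.
auto preconK = linear_operator<LA::MPI::Vector>(system_matrix, preconditioner);
auto K_inv   = inverse_operator(K, solver, preconK);

// Variant 2: pass the preconditioner directly to inverse_operator.
auto K_inv_direct = inverse_operator(K, solver, preconditioner);

completely_distributed_solution = K_inv * system_rhs;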

lwy5515 (Contributor, Author) commented Sep 12, 2021

Thank you so much for the clarification.

jppelteret (Member) commented:

Perhaps you can give either (or both) of those modifications a try, and let us know if that fixes the issue :-)

lwy5515 (Contributor, Author) commented Sep 12, 2021

@jppelteret
Both suggestions fix the issue. I also tested with

auto K = linear_operator<LA::MPI::Vector>(system_matrix);
auto preconK = linear_operator<LA::MPI::Vector>(K, preconditioner);

It seems that this does not work (I get the same error message). So basically I should set up preconK with system_matrix instead of the operator K. But I am a bit confused, since step-70 has

const auto A = linear_operator<LA::MPI::Vector>(system_matrix.block(0, 0));
const auto amgA = linear_operator(A, prec_A);

Any suggestions?

luca-heltai (Member) commented Sep 12, 2021 via email

jppelteret (Member) commented:

I guess the reason step-70 works is that amgA is never used within an inverse_operator and, as Luca implied, never sets up or uses the inverse payload.

OK, having looked through some of the code I now agree that this must be a bug, and that it's a code path that hasn't been tested yet. It looks to me like we have everything in place with the expectation that this should work (we permit LinearOperator exemplars for both linear_operator and inverse_operator, and the call to inverse_operator always tries to set up the inverse_payload, so there must be something subtly wrong). @lwy5515, could you please set up an MWE that I can use as the basis of a test (maybe with one of the working solutions in place, and the failing approach commented out below)? If I have something that I can simply drop into the test suite and get going with, that would be very helpful. There's already the test tests/mpi/step-40.cc, but it has been modified to use PETSc only. Maybe it would be easy enough to convert it to a Trilinos variant?

jppelteret (Member) commented:

Actually, I think I have some idea of what the problem could be. None of the methods that set up the inverse_payload for Trilinos matrices are specialised for a Preconditioner of type LinearOperator. The weird thing is that I cannot see why this does not produce a compiler error, as I'm sure that the Payload in https://github.com/dealii/dealii/blob/master/include/deal.II/lac/linear_operator.h#L705-L706 cannot be interpreted as an EmptyPayload. Maybe if you had done this

auto K       = linear_operator<LA::MPI::Vector>(system_matrix);
auto preconK = linear_operator<LA::MPI::Vector>(K, preconditioner); // Specify vector type

then you would see some different behaviour?

lwy5515 (Contributor, Author) commented Sep 12, 2021

> then you would see some different behaviour?

Well, I got the same error message.

> @lwy5515, could you please set up an MWE that I can use as the basis of a test

Here is a test based on step-40: step-40.cc.zip

jppelteret (Member) commented:

Thanks for the test. I managed to simplify it further, mimicking what was done in the other step-40-based tests in the test suite. I've also opened a PR that fixes this issue.
