PETScWrappers::MatrixBase implement clear_rows_columns #14684

Merged

Conversation

stefanozampini (Contributor)

Add missing support for zeroing rows and columns of a PETSc matrix.

Note that this is still not possible for block operators; that will require support for this operation in PETSc's MATNEST matrix type.
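
For illustration, a minimal usage sketch of the new functionality is shown below. It assumes the new member is named clear_rows_columns (as in the pull request title) and that it mirrors the existing MatrixBase::clear_rows() interface, i.e. a list of indices plus an optional new diagonal value; the merged header is the authoritative reference for the exact signature.

  #include <deal.II/base/types.h>
  #include <deal.II/lac/petsc_sparse_matrix.h>
  #include <deal.II/lac/vector_operation.h>

  #include <vector>

  void zero_constrained_rows_and_columns(
    dealii::PETScWrappers::MPI::SparseMatrix &matrix)
  {
    // Indices whose rows and columns should be zeroed, e.g. constrained DoFs.
    const std::vector<dealii::types::global_dof_index> indices = {0, 5, 42};

    // Exchange all pending additions first; otherwise a later compress()
    // could re-populate the entries zeroed out below.
    matrix.compress(dealii::VectorOperation::add);

    // Zero the listed rows and columns and put 1.0 on the diagonal so the
    // matrix stays non-singular.
    // NOTE: the signature is assumed to mirror clear_rows(); check the
    // merged header for the actual interface.
    matrix.clear_rows_columns(indices, 1.0);
  }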

@bangerth (Member) left a comment


Very nice! I love that you bring all of that knowledge of what PETSc can do these days to these interfaces!

There is a comment in matrix_tools.h that relates to the function you change:

  /**
   * Apply Dirichlet boundary conditions to the system matrix and vectors as
   * described in the general documentation of this namespace. This function
   * works on the classes that are used to wrap PETSc objects.
   *
   * <b>Important:</b> This function is not very efficient: it needs to
   * alternatingly read and write into the matrix, a situation that PETSc does
   * not handle well. In addition, we only get rid of rows corresponding to
   * boundary nodes, but the corresponding case of deleting the respective
   * columns (i.e. if @p eliminate_columns is @p true) is not presently
   * implemented, and probably never will be, because it is too expensive without
   * direct access to the PETSc data structures. (This leads to the situation
   * where the action indicated by the default value of the last argument is
   * actually not implemented; that argument has <code>true</code> as its
   * default value to stay consistent with the other functions of same name in
   * this namespace.)
   *
   * This function is used in step-17 and step-18.
   *
   * @note If the matrix is stored in parallel across multiple processors
   * using MPI, this function only touches rows that are locally stored and
   * simply ignores all other rows. In other words, each processor is
   * responsible for its own rows, and the @p boundary_values argument needs
   * to contain all locally owned rows of the matrix that you want to have
   * treated. (But it can also contain entries for degrees of freedom not
   * owned locally; these will simply be ignored.) Further, in the context of
   * parallel computations, you will get into trouble if you treat a row while
   * other processors still have pending writes or additions into the same
   * row. In other words, if another processor still wants to add something to
   * an element of a row and you call this function to zero out the row, then
   * the next time you call compress() may add the remote value to the zero
   * you just created. Consequently, you will want to call compress() after
   * you made the last modifications to a matrix and before starting to clear
   * rows.
   */
  void
  apply_boundary_values(
    const std::map<types::global_dof_index, PetscScalar> &boundary_values,
    PETScWrappers::MatrixBase &                           matrix,
    PETScWrappers::VectorBase &                           solution,
    PETScWrappers::VectorBase &                           right_hand_side,
    const bool eliminate_columns = true);

Is this still current after your changes?
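
For context, the usage pattern described by the note above looks roughly like the sketch below. Only MatrixTools::apply_boundary_values and its parameters are taken from the declaration quoted above; the surrounding function and variable names are illustrative. The last argument is passed as false because, per that comment, column elimination was not implemented before this pull request.

  #include <deal.II/base/types.h>
  #include <deal.II/lac/petsc_sparse_matrix.h>
  #include <deal.II/lac/petsc_vector.h>
  #include <deal.II/lac/vector_operation.h>
  #include <deal.II/numerics/matrix_tools.h>

  #include <map>

  // Illustrative wrapper; the function and variable names here are made up.
  void apply_dirichlet_values(
    const std::map<dealii::types::global_dof_index, PetscScalar> &boundary_values,
    dealii::PETScWrappers::MPI::SparseMatrix &matrix,
    dealii::PETScWrappers::MPI::Vector &      solution,
    dealii::PETScWrappers::MPI::Vector &      right_hand_side)
  {
    // Flush all pending writes/additions before clearing rows; otherwise a
    // later compress() could re-insert remote contributions into rows that
    // were just zeroed.
    matrix.compress(dealii::VectorOperation::add);
    right_hand_side.compress(dealii::VectorOperation::add);

    // Each process only treats its locally owned rows; entries in
    // boundary_values for rows owned elsewhere are ignored.
    dealii::MatrixTools::apply_boundary_values(
      boundary_values,
      matrix,
      solution,
      right_hand_side,
      /*eliminate_columns=*/false);
  }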

Review comments on include/deal.II/lac/petsc_matrix_base.h and source/lac/petsc_matrix_base.cc were resolved.
stefanozampini and others added 2 commits on January 17, 2023.
Co-authored-by: Wolfgang Bangerth <bangerth@colostate.edu>
@stefanozampini (Contributor, Author)

@bangerth I have implemented your suggestions. This is ready to go.

@bangerth (Member)

/rebuild

@tamiko merged commit a7dc532 into dealii:master on Jan 18, 2023.