
Refactor the PETSc matrix/vector classes #5454

Closed
3 of 8 tasks
drwells opened this issue Nov 13, 2017 · 9 comments

Comments

@drwells
Member

drwells commented Nov 13, 2017

#5453 reminded me that I have been meaning to clean things up in this part of the library. We currently have several open issues for the PETSc bindings.

I propose that I do the following to address these:

  • Deprecate PETScWrappers::SparseMatrix (for the same reasons we deprecated the serial vector class)
  • Implement a new class PETScWrappers::VectorView based on the wrapper mode currently in VectorBase. This might inherit from ReadWriteVector and LinearAlgebraVector and should conform to the new interface specified by Future interfaces to vectors from other libraries #745. I imagine this class as a bridge between Vec and our own new vector classes, while not necessarily owning the Vec (see the sketch after this list).
  • Similarly, rename MatrixBase to MatrixView and have the three remaining PETScWrappers matrix classes inherit from it (they would then be owning views)
  • Remove the VectorBase class (since only one class now inherits from it)
  • All PETSc classes and functions should be declared directly in the PETScWrappers namespace; i.e., the nested PETScWrappers::MPI namespace should be removed, since everything is intended to work with MPI-compatible data structures (with, of course, exceptions like PreconditionLU that are due to missing algorithms in PETSc)
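
For concreteness, here is a minimal sketch of what such a non-owning view could look like. The class name and interface are illustrative assumptions, not a final design:

```cpp
// Hypothetical sketch of the proposed PETScWrappers::VectorView: a
// non-owning wrapper around an existing PETSc Vec.
#include <petscvec.h>

namespace PETScWrappers
{
  class VectorView
  {
  public:
    // Wrap an existing Vec without taking ownership of it.
    explicit VectorView(const Vec v)
      : vector(v)
    {}

    // The destructor deliberately does not call VecDestroy(): the
    // caller retains ownership of the underlying Vec.
    ~VectorView() = default;

    // Allow passing the view wherever a raw Vec is expected (e.g., to
    // PETSc solver routines).
    operator Vec() const
    {
      return vector;
    }

  protected:
    Vec vector;
  };

  // An owning vector class could then inherit from VectorView, create
  // the Vec in its constructor, and call VecDestroy() in its destructor.
} // namespace PETScWrappers
```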

This is a lot of work, but I think it's good to have a road map for the future direction of this part of the library.

@tjhei
Member

tjhei commented Nov 13, 2017

This might inherit from ReadWriteVector and LinearAlgebraVector and should conform to the new interface

The whole point of the new system is that we need to import/export to/from a PETSc vector and a serial ReadWriteVector. I don't think it is necessary to inherit from ReadWriteVector. What @Rombur did for Trilinos was to create new vectors in a different namespace. I think that is what is appropriate here too.

edit: The problem is that PETSc vectors require locking/unlocking for each element access, so we cannot treat them as both a block of memory and a linear algebra vector, at least not in a multithreaded context. @Rombur's design fixes that issue, but only if you separate the two concepts.
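
To make the locking/unlocking point concrete, here is a small sketch of the access pattern in PETSc's C API (the function sum_local_entries is made up for illustration): every array access must be bracketed by a Get/Restore pair, which is what makes fine-grained concurrent element access problematic.

```cpp
#include <petscvec.h>

// Sum the locally owned entries of a Vec. The Get/Restore pair around
// the loop is the "locking/unlocking" referred to above.
PetscErrorCode sum_local_entries(Vec v, PetscScalar *sum)
{
  const PetscScalar *array;
  PetscInt           n;
  PetscErrorCode     ierr;

  ierr = VecGetLocalSize(v, &n);
  CHKERRQ(ierr);

  // "Lock": check out read-only access to the locally owned array.
  ierr = VecGetArrayRead(v, &array);
  CHKERRQ(ierr);

  *sum = 0.0;
  for (PetscInt i = 0; i < n; ++i)
    *sum += array[i];

  // "Unlock": every VecGetArrayRead() must be paired with a
  // VecRestoreArrayRead().
  ierr = VecRestoreArrayRead(v, &array);
  CHKERRQ(ierr);

  return 0;
}
```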

@bangerth
Member

Yes, the idea is that there is only one or two (deal.II) classes that implement the ReadWriteVector interface, and that all other vector types are VectorSpaceVector-type classes that import/export their elements via ReadWriteVector. We specifically do not want to allow individual element access for PETSc and Trilinos vectors.
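
A hedged usage sketch of this import/export pattern (header and method names follow deal.II's ReadWriteVector interface of that time and may differ between versions): element access happens only on the ReadWriteVector, and the PETSc vector is touched only through one bulk import.

```cpp
#include <deal.II/base/index_set.h>
#include <deal.II/lac/petsc_parallel_vector.h>
#include <deal.II/lac/read_write_vector.h>

using namespace dealii;

void read_locally(const PETScWrappers::MPI::Vector &petsc_vec,
                  const IndexSet                   &locally_owned)
{
  // Copy the locally owned entries out of the PETSc vector in a single
  // bulk operation ...
  LinearAlgebra::ReadWriteVector<double> rw_vector(locally_owned);
  rw_vector.import(petsc_vec, VectorOperation::insert);

  // ... after which individual entries can be read freely without
  // touching PETSc at all.
  for (const auto i : locally_owned)
    {
      const double value = rw_vector(i);
      (void)value; // use the value, e.g., in a residual computation
    }
}
```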

@drwells
Member Author

drwells commented Nov 13, 2017

The discussion in August is slowly coming back to mind. You're both right of course. We already ban using multiple threads with PETSc so this won't be an issue.

I'll update my list.

@drwells
Member Author

drwells commented Nov 13, 2017

What @Rombur did for Trilinos is creating new vectors in a different namespace. I think that is what is appropriate here too.

We can do three things at once, then: by putting the new vector class in PETScWrappers and declaring it in lac/petsc_vector.h, we solve the renaming problem and get a new class with a new name.
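
A hypothetical layout sketch of that idea (the file contents and declarations are illustrative only):

```cpp
// --- include/deal.II/lac/petsc_vector.h (illustrative) ---
#ifndef dealii_petsc_vector_h
#define dealii_petsc_vector_h

namespace dealii
{
  namespace PETScWrappers
  {
    // The new MPI-aware vector class lives directly in PETScWrappers;
    // there is no nested MPI namespace anymore, since everything is
    // intended to work with MPI-compatible data structures.
    class Vector;
  } // namespace PETScWrappers
} // namespace dealii

#endif
```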

@tjhei
Member

tjhei commented Nov 14, 2017

We already ban using multiple threads with PETSc so this won't be an issue.

Well, if we go forward with the plan, we no longer need to limit PETSc computations to one thread because multithreaded read/write access will only happen to ReadWriteVectors and not to PETSc vectors.

@drwells
Member Author

drwells commented Nov 14, 2017

I think that PETSc still keeps track of things at a global level, so one cannot create PETSc objects simultaneously across multiple threads without getting a race condition. I'll look into this.

@tjhei
Member

tjhei commented Nov 15, 2017

I think that PETSc still keeps track of things at a global level, so one cannot create PETSc objects simultaneously across multiple threads without getting a race condition. I'll look into this.

That might be true (and likely is). I don't think we have ever spelled out what kind of thread-safety expectations we have for particular classes. All our multithreaded code only parallelizes work done over cells (or internal vector operations that are safe, like Vector::add()). We also serialize all write access to vectors and matrices inside WorkStream. So the only remaining problem is reading from vectors on multiple threads (for example, when computing residuals or errors); this is why we disable multithreading when using PETSc. All of those cases should be fixed by switching to ReadWriteVector. In short: I think we should be "fine" with the proposed changes.
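
A minimal sketch of why that fixes the read case, assuming ReadWriteVector's n_elements()/local_element() interface: once values have been imported, several threads can read concurrently because no PETSc calls are involved.

```cpp
#include <deal.II/lac/read_write_vector.h>

#include <cstddef>
#include <thread>
#include <vector>

using namespace dealii;

// Compute the squared l2 norm of the locally stored entries using
// several threads. Reads never mutate the vector, so no PETSc locking
// (and no synchronization between the threads) is needed.
double parallel_norm_squared(const LinearAlgebra::ReadWriteVector<double> &rw)
{
  const unsigned int       n_threads = 4;
  std::vector<double>      partial_sums(n_threads, 0.0);
  std::vector<std::thread> threads;

  for (unsigned int t = 0; t < n_threads; ++t)
    threads.emplace_back([&rw, &partial_sums, t, n_threads]() {
      // Each thread reads a strided slice of the local entries.
      for (std::size_t i = t; i < rw.n_elements(); i += n_threads)
        partial_sums[t] += rw.local_element(i) * rw.local_element(i);
    });

  for (std::thread &thread : threads)
    thread.join();

  double sum = 0.0;
  for (const double s : partial_sums)
    sum += s;
  return sum;
}
```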

@drwells
Member Author

drwells commented Nov 15, 2017

I agree. Creating multiple subscriptions is probably not thread-safe.

PETSc also supports compilation in a 'thread-safe' mode, which I think prevents these race conditions.

@drwells
Member Author

drwells commented Jul 16, 2023

I think we've made enough progress on improving our PETSc wrappers in several new ways that we can close this as completed / out-of-date.

drwells closed this as completed Jul 16, 2023