HYPRE.jl solvers and preconditioners #254
Conversation
Codecov Report
@@ Coverage Diff @@
## main #254 +/- ##
==========================================
+ Coverage 65.58% 68.73% +3.15%
==========================================
Files 12 14 +2
Lines 738 838 +100
==========================================
+ Hits 484 576 +92
- Misses 254 262 +8
hcache = cache.cacheval
# Convert A, b, and u to HYPRE's own types only when they have changed or have
# not been converted yet, so repeated solves reuse the existing HYPRE objects.
if hcache.isfresh_A || hcache.A === nothing
    hcache.A = cache.A isa HYPREMatrix ? cache.A : HYPREMatrix(cache.A)
    hcache.isfresh_A = false
end
if hcache.isfresh_b || hcache.b === nothing
    hcache.b = cache.b isa HYPREVector ? cache.b : HYPREVector(cache.b)
    hcache.isfresh_b = false
end
if hcache.isfresh_u || hcache.u === nothing
    hcache.u = cache.u isa HYPREVector ? cache.u : HYPREVector(cache.u)
    hcache.isfresh_u = false
end
Does this play nicely with Distributed? So for example, if someone used a DistributedArray A and b, will this nicely use the memory? Copy it again? Distribute the same way? I'm curious because if we could have a default solver choice on DistributedArray values then that will fix a few downstream issues.
Right now it will only work if you have either set up your matrix and vector with HYPRE to begin with, or if you use a SparseMatrixCSC or SparseMatrixCSR. In the latter case the matrix will be duplicated inside the hypre library, and some buffers need to be allocated in order to send the data to HYPRE.
I don't think it would work with Distributed or DistributedArray since those are not MPI processes, right?
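For illustration, a short sketch of that conversion path: the HYPREMatrix and HYPREVector constructors are the ones called in the diff above, while the HYPRE.Init() call and the particular test matrix are assumptions for the sake of a self-contained example:

```julia
using HYPRE, SparseArrays

HYPRE.Init()  # set up the hypre library (and MPI if needed)

# A regular Julia sparse matrix and vector ...
A = spdiagm(-1 => -ones(9), 0 => 2ones(10), 1 => -ones(9))
b = rand(10)

# ... are copied into hypre's internal storage, as described above:
hA = HYPREMatrix(A)   # duplicates the matrix data inside the hypre library
hb = HYPREVector(b)   # allocates a buffer and sends the data to HYPRE
```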
Makes sense. With Distributed you could make it use the MPI cluster manager, but that's not too common.
I'll merge and format
Thanks!
This patch adds support to LinearSolve.jl for HYPRE.jl solvers and preconditioners. This is implemented as a package extension and thus requires Julia version 1.9 or higher.
Fixes #106, fixes #167.
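A hedged sketch of the intended usage of the new solvers and preconditioners; the exact names (HYPREAlgorithm, HYPRE.PCG, and passing HYPRE.BoomerAMG as a preconditioner via the Pl keyword) are assumptions based on this PR and should be verified against the LinearSolve.jl documentation:

```julia
using LinearSolve, HYPRE, SparseArrays

HYPRE.Init()

# Symmetric positive definite test problem (1-D Laplacian)
A = spdiagm(-1 => -ones(99), 0 => 2ones(100), 1 => -ones(99))
b = rand(100)
prob = LinearProblem(A, b)

# HYPRE's PCG solver with a BoomerAMG preconditioner (names assumed from this PR)
sol = solve(prob, HYPREAlgorithm(HYPRE.PCG); Pl = HYPRE.BoomerAMG)
```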