Parallel low-order refined classes for spaces of vectors #3167
The LOR classes are not limited to Hypre solvers; you can use any solver that works with the assembled LOR operator. The problem you are running into is that in parallel, the LOR discretization assembles a `HypreParMatrix` rather than a `SparseMatrix`, so `SparseMatrix`-based smoothers like `GSSmoother` cannot be used. Instead, you can use `HypreSmoother` with the Gauss-Seidel type:

```cpp
LORSolver<HypreSmoother> gs(a, ess_dofs);
gs.GetSolver().SetType(HypreSmoother::GS);
```

Note that Gauss-Seidel is not usually very suitable for parallel runs, so you might want to use a different smoother. To address your second question, I don't know immediately whether the LOR solvers will work well for elasticity problems. LOR is known to work well for many elliptic problems, but it depends on the PDE you are solving. However, if the LOR solvers work properly in serial, then they should also be OK in parallel. Maybe you can post the source code so we can reproduce locally.
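For context, a minimal sketch of how this preconditioner might be used inside a parallel solve follows. Names such as `a`, `ess_dofs`, `A`, `B`, and `X` are assumed to come from an ex2p-style `FormLinearSystem` setup; this is an illustration of the pattern, not a tested excerpt from the user's code.

```cpp
// Assumes: ParBilinearForm a (assembled), Array<int> ess_dofs (essential true
// dofs), and OperatorHandle A, Vector B, X obtained from FormLinearSystem.
LORSolver<HypreSmoother> gs(a, ess_dofs);   // build the LOR operator, wrap a HypreSmoother
gs.GetSolver().SetType(HypreSmoother::GS);  // select hybrid Gauss-Seidel smoothing

CGSolver cg(MPI_COMM_WORLD);
cg.SetRelTol(1e-8);
cg.SetMaxIter(500);
cg.SetPrintLevel(1);
cg.SetOperator(*A);
cg.SetPreconditioner(gs);
cg.Mult(B, X);
```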
Thank you for the prompt response. Yes, I was able to use the HypreSmoother options. Here is the source code if you would like to have a look. :-)
I think it's because the LOR class wasn't matching the vdim ordering of the space, and in parallel it was using a different ordering. Does #3177 fix it for you?
I tried the fix. The LOR preconditioning in parallel performs better than not providing one, for the most part. However, I run into an issue when the problem is run with polynomial order 4 or higher. It exits with the following error message:

```
MFEM abort: (r,c,f) = (421,427,973)
Abort(1) on node 1 (rank 1 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 1
```

I was able to reproduce the same error both on my local machine and on the cluster. Here is the source code and the mesh file used to run the problem, using the command
Your code seems to work for me with order 4 and the given mesh. Maybe try pulling from master, running make clean, rebuilding in debug mode, etc., and if it still fails, post the backtrace. It looks like your mesh is a tet mesh; LOR preconditioners work much better with all-hex meshes (they will not really help at all with tet meshes).
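For reference, the pull/clean/rebuild cycle might look like the following, assuming a makefile-based MFEM checkout (the target name `pdebug` for a parallel debug build follows MFEM's top-level makefile; adjust if you use a CMake build):

```shell
# Update to the latest master and rebuild MFEM with debug flags
git pull origin master
make clean
make pdebug -j 4   # parallel (MPI) debug build target in MFEM's makefile
```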
I pulled fresh from master and built MFEM in debug mode.
The program only fails under the following conditions: For some reason I was not able to get a backtrace while running on multiple cores. Below is the backtrace from running on a single core (where there is no assertion failure, so I just set a breakpoint at FinalizeParTopo).
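One common way to get a backtrace from a multi-rank run is to attach a debugger to every MPI process. The sketch below assumes an X11 environment and that the executable is named `ex2p` (a guess; substitute your binary and arguments):

```shell
# Open one gdb session per MPI rank, each in its own xterm window;
# in each window, type `run`, wait for the abort, then `bt` for the backtrace
mpirun -np 2 xterm -e gdb ./ex2p
```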
Thank you for letting me know that the LOR preconditioner is not well suited for tet meshes.
@karthichockalingam -- I edited your post above for clarity, can you please check it?
Thank you @tzanio - it reads fine. I wasn't exactly sure how to format it.
Hello,
I used the fix from #3153 to run preconditioned linear elasticity problems (ex2.cpp). When running LOR serially, it works and converges.

In parallel, the LOR classes seem limited (by type) to Hypre solvers. `LORSolver<HypreBoomerAMG>` compiles and runs, whereas `LORSolver<GSSmoother>` errors out at runtime with the following message:

```
Assembling: r.h.s. ... matrix ...
SparseSmoother::SetOperator : not a SparseMatrix!
```

Unfortunately, the elasticity problem with `LORSolver<HypreBoomerAMG>` severely underperforms, to the point that not passing a preconditioner at all is better. Can you help sort out whether LOR works in parallel for linear elasticity problems?
Thank you!