Getting tensor mechanics ready for parallel #13364
Conversation
Force-pushed from 2568bf1 to 64c380c.
Force-pushed from 64c380c to 6fc33b8.
FYI @permcody
```diff
@@ -36,6 +36,7 @@
   input = 'crysp_cutback.i'
   exodiff = 'crysp_cutback_out.e'
   allow_warnings = true
+  max_parallel = 1
```
All of the tests in tensor_mechanics/test/tests/crystal_plasticity seem to be flagged for deprecation, but FiniteStrainCrystalPlasticity has not been deprecated. This is the only test that fails, so I just restricted it to run in serial. I'm not sure whether @permcody or @dschwen plan on yanking these out soon or not.
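For anyone skimming the diff above, the restriction lives in the test's spec block; a minimal sketch of the full block follows (the test name is inferred from the input file and may differ):

```
[Tests]
  [./crysp_cutback]
    type = 'Exodiff'
    input = 'crysp_cutback.i'
    exodiff = 'crysp_cutback_out.e'
    allow_warnings = true
    # New: restrict this test to a single MPI rank until the parallel
    # diff is understood; the gold file only matches serial output.
    max_parallel = 1
  [../]
[]
```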
Job Documentation on 5e60037 wanted to post the following: View the site here. This comment will be updated on new commits.
@tophmatthews - If you plan to switch the pc_type to lu, keep in mind that a parallel LU solve needs an external package such as SuperLU_dist or MUMPS behind it.
@milljm - Do we have any good way of running a version of PETSc without SuperLU? In lieu of that, we could potentially add an extra PETSc config check to make sure that SuperLU is available when running in parallel... That actually sounds a lot easier. @fdkong - Can you confirm the validity of my statement above?
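For reference, selecting a parallel-capable LU in a MOOSE input looks something like the sketch below. This is illustrative only; note that the solver-package option is named -pc_factor_mat_solver_type in newer PETSc releases:

```
[Executioner]
  type = Transient
  solve_type = 'PJFNK'
  # A parallel LU factorization is delegated to an external package,
  # which is why a PETSc built without SuperLU_dist (or MUMPS) fails here.
  petsc_options_iname = '-pc_type -pc_factor_mat_solver_package'
  petsc_options_value = 'lu superlu_dist'
[]
```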
@permcody we do not... All our scripts add that configuration switch. Looks like we need to give some TLC to our scripts/update_and_rebuild_petsc.sh script, e.g. allow a way to disable or add a configure option, much like we would do with ..., for example.
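Something along these lines, assuming the script learns to forward extra arguments straight to PETSc's configure (hypothetical usage):

```bash
# Rebuild PETSc without SuperLU to reproduce the failure mode locally.
# --download-superlu_dist=0 is a standard PETSc configure switch; having
# update_and_rebuild_petsc.sh pass it through is the proposed behavior.
cd $MOOSE_DIR
./scripts/update_and_rebuild_petsc.sh --download-superlu_dist=0
```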
When getting any module parallel things in, it is better to test against petsc-alt as well.
I'm not sure ...
Also, I found some tests failed depending on processor count, i.e. some passed with -p 2 but failed with -p 3 or -p 4. Also, the heavy switch brought out some other failures.
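For anyone reproducing this, the sweep looks roughly like the following (run_tests's -p and --heavy flags are standard TestHarness options):

```bash
cd modules/tensor_mechanics
# Some failures only appear at particular rank counts, so sweep a few.
for n in 2 3 4; do
  ./run_tests -p $n
done
# The heavy-flagged tests surfaced additional failures.
./run_tests --heavy -p 4
```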
In phase field I'm seeing that I have to tighten tolerances quite a bit to make parallel tests pass with asm.
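In input-file terms that means something like the sketch below (illustrative values; nl_rel_tol and nl_abs_tol are standard Executioner parameters):

```
[Executioner]
  type = Transient
  solve_type = 'PJFNK'
  petsc_options_iname = '-pc_type'
  petsc_options_value = 'asm'
  # Tightened so serial and parallel runs agree to within the exodiff
  # comparison, e.g. down from the 1e-8 default.
  nl_rel_tol = 1e-10
  nl_abs_tol = 1e-11
[]
```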
Tolerances work for most of the failures, but then it becomes a balancing act to get convergence in serial; I couldn't find an appropriate balance with just tolerances.
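The other knob is the comparison itself: loosening what exodiff accepts rather than tightening the solve. A hypothetical spec entry (rel_err and abs_zero are Exodiff tester parameters; the test name and values are illustrative):

```
[Tests]
  [./some_parallel_test]
    type = 'Exodiff'
    input = 'some_test.i'
    exodiff = 'some_test_out.e'
    # Loosened comparison to tolerate rank-count-dependent round-off.
    rel_err = 1e-5
    abs_zero = 1e-9
  [../]
[]
```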
@permcody can you add TM to the Partial Modules Parallel Test (in progress)?
TM is added. I'm OK with adding lu as a preconditioner; I just want to make sure that we don't pick up regressions for PETSc builds that don't have SuperLU or MUMPS installed. The only robust way for me to do this is to add a target that actually tests that configuration. We'll be working on that!
Removed lu where possible, and added ...
Are you regolding with tighter tolerances?
I was trying to avoid regolding. I didn't have to for some with tighter tolerances, and lu worked for the others.
Also, from looking at the exodiff, regolding wouldn't have worked, at least for one or two cases I was looking at.
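(For completeness, a manual regold would look roughly like this; the executable name and relative paths assume the usual module layout:)

```bash
cd modules/tensor_mechanics/test/tests/crystal_plasticity
# Rerun the input in serial, then copy the fresh output over the gold file.
mpiexec -n 1 ../../../tensor_mechanics-opt -i crysp_cutback.i
cp crysp_cutback_out.e gold/crysp_cutback_out.e
```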
UGGGG SQA! Well, if that's not motivation to not touch the ...
Force-pushed from 2704db9 to 21d1151.
Force-pushed from 21d1151 to 5e60037.
ref #2975