Fix bad interaction between volumetric locking correction and xfem #8691
Conversation
Looks good to me.
...yeah, looks good except that I apparently broke more tests than I thought I did! I'll sort these tests out.
View the site at http://mooseframework.org/docs/PRs/8691/site
These tests started diffing with the minor code change in how the element volume is obtained. I verified that the change in the computed volume is almost zero; these tests are inherently problematic. I modified the parameters of the rate_dep_smear_crack test to make it more stable. The cracking_exponential test had a minor diff, and I re-golded the file.
These tests are fairly large, and thus fairly sensitive to small changes in the mechanics models. I verified that there were only extremely small diffs in the computed volumes. I made some modifications to these tests to make them less sensitive to such changes in the future.
These tests diffed with the change to volumetric locking because they were ill-posed. There was an initial gap between the two blocks, and contact gets enforced at the point where the two blocks first touch during the iterations. The location of that point can be very sensitive to small changes in the behavior of the mechanics models, as happened here. I made a separate mesh for these models with no initial gap to make them more robust in the future.
Because the volume of partial elements is computed differently, these models are expected to have changed.
After my recent change to the volume calculation, these tests started failing because of the SuperLU issue that is affecting other tests. We have skipped a number of others already; add these to the list.
This test had a subtle difference on one of the platforms because one of the nodes didn't release. Make the time steps bigger to make the behavior more deterministic.
@bwspenc - This commit is causing regular random failures on the Intel target now.
@permcody Is it causing failures in contact tests that fail on PETSc 3.7.4? Daniel mentioned that he had a commit that failed once due to that. I know that this commit did make some of those issues pop up in new places, but it's just new manifestations of the known issue that the SuperLU in PETSc 3.7.4 is broken.
I don't know, but it sometimes says that it can't find a contact node, and other times it works. It only started with this commit. I'd say it fails 60-70% of the time.
Yes, that's the PETSc 3.7.4 issue. We need to change that error message to say that the solution vector is full of NaNs rather than that it can't find the contact node. The tests pass on the machines that have the older, non-broken version of PETSc, so if you're lucky and get one of those machines, they'll go through. I can add more of those tests to the skip list, but we have a real problem with this version of PETSc. It sounds like we're having better success with 3.7.5 plus another option that Fande is passing in to SuperLU, but it's still not a 100% solution.
closes #8636
Note that this commit makes a few minor code changes: it replaces calls to _current_elem->volume() with _current_elem_volume to correctly account for partial element volumes when used with XFEM. This made a number of tests diff, and I went through the tests and verified that there were only extremely small differences between these two volumes in all cases. These tests were all either ill-posed or overly sensitive, so hopefully the changes I made to them will make them more robust in the future.