Enable usage of dual shape functions for the Lagrange multipliers #15216
Conversation
Force-pushed: 742caa2 → c6b76ff
Force-pushed: c6b76ff → 3eca7e2
Hi @bwspenc, would you mind helping me review this PR?
framework/src/base/Assembly.C (Outdated)
FEShapeData * fesd = _fe_shape_data_dual_lower[fe_type];

fe_lower->set_calculate_dual(true);
fe_lower->reinit(elem, pts, weights);
@lindsayad here is where the dualLowerD elems are re-initialized with specific pts. dual_coeff is not computed in this case, which causes a seg fault...
See my comment libMesh/libmesh#2552 (comment). Is it valid to calculate the dual coefficients with points that don't correspond to a quadrature rule for that element? Is the correct solution to call fe_lower->reinit(elem) first before calling fe_lower->reinit(elem, pts, weights)?
The reinitLowerDElemRef and reinitLowerDElemDualRef are supposed to compute the quadrature rule from the customized integration points for the slave side. Apologies for the earlier confusion... I also just realized that the dual_coeff should be computed using the same integration points as the primal. So it is the "full" element in general cases, and becomes complicated for the slave elements when we are integrating using projected points (custom_xi1_pts). And I guess the local matrices totally depend on the 'correctness' of the customized integration points in order to be non-singular...
So what are you proposing? One can easily imagine (just thinking about the linear Lagrange case for simplicity) that you could have a mortar elem which occupies a very tiny fraction of the total slave element "volume", e.g. where the custom_xi1_pts are clustered on the farthest LHS or RHS of the slave element (or in the middle, or wherever).
My question to you is: is the dual method valid for any set of "integration" points, putting aside "practical" concerns like whether the biorthogonality condition yields a non-singular matrix?
Theoretically it should be valid for the set of integration points that are the same as those of the primal element. I suppose this is a better way to put it...
Is "primal element" language that people use? What is a "primal element"?
Good question. The "primal and dual" concept is something people often use: a "dual basis" is defined relative to its "primal basis". Here I think it is more appropriate to say the "element that uses the primal basis" and the "element that uses the dual basis".
Force-pushed: 3eca7e2 → b3a9dbe
Hi @lindsayad, it seems libmesh is not updated for the MOOSE checks here... do you know when this will be updated, and who would be the best person to talk to about this?
I am the best person to talk to. See #15182
Great! So how soon will the checks be based on an updated libmesh? (I assume that is when I will be able to pass the checks here, right?)
You'll have to wait until the libmesh update is merged and until it passes testing on next and is "merged" into devel. Alternatively, as soon as the update is merged into next, you could rebase your PR onto next.
Force-pushed: b3a9dbe → d10f8fc
Job Documentation on be92a2c wanted to post the following: View the site here. This comment will be updated on new commits.
Hi @lindsayad, would you mind taking a look at the failing parallel test? I am not sure why the solver fails using 2 cores... Changing to other solvers does not help.
Lol, I can't get this to fail locally. Are you using conda libmesh?

Oh, I was in the wrong directory lol. The non-conforming one looks great!
Well, if I turn on … @jwpeterson, off the top of your head do you see any reason why mortar with a perfectly conforming mesh would produce a singular matrix? These tests are just doing solution continuity. @fdkong, I don't suppose you can immediately see why this matrix is singular, can you?
If I make the primal variable …
Hmm... I don't think I ever tested a fully "conforming mesh" case, but I should have. I did have a few meshes where some of the nodes just happened to align, in order to make sure that was working OK.

Recall that for the non-dual, equal-order Lagrange multiplier formulation, a sufficient condition for LBB stability is h_lambda >= C * h_u with C > 1, where h_lambda and h_u are the Lagrange multiplier and primal variable mesh sizes, respectively. In the conforming mesh case you are right at the C = 1 limit, so there's no guarantee it would produce stable solutions, but I'm also a bit surprised that it would produce a singular matrix.

In the dual Lagrange multiplier formulation, are you "condensing" out the DOFs for the LM variable and still getting a singular matrix? Or does the current solver retain all those DOFs like in the non-dual LM formulation?
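For reference, the stability condition quoted above can be restated in standard notation (this is only a rendering of the same statement, with the same symbols):

```latex
% Sufficient (not necessary) condition for LBB stability of the
% non-dual, equal-order Lagrange multiplier mortar formulation,
% as stated in the comment above:
\[
  h_{\lambda} \;\ge\; C\, h_{u}, \qquad C > 1,
\]
% where $h_{\lambda}$ and $h_{u}$ are the Lagrange-multiplier and
% primal-variable mesh sizes. A conforming interface mesh sits exactly
% at the $C = 1$ limit, so stability is not guaranteed there.
```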
I am using my own libmesh. Yeah, it is also confusing that the non-conforming case works fine!
@lindsayad I didn't look at the mesh used for this specific problem but if it's still singular for say a 2x2 conforming mesh of Quad4's with the interface through the center, it might be easier to see the linear combinations and/or visualize the null space.
@dewenyushu, are you saying that the test works OK in serial but not in parallel? @lindsayad mentioned using …
Thanks for the explanation! I have been wondering about the stability of these schemes but have not been able to find a good answer.
No, I have yet to condense out the LM DOFs; I am starting to work on the condensation in another PR. I doubt that the condensation process would change the singularity of the matrix, but I will give it a try!
Yes, the solver retains all DOFs.
It's a surprisingly complicated issue. There is a Tech. Report which has some basic information in Sections 3 and 4 on stability and error estimates for the non-dual method that you may find interesting.
Yes, the test works fine in serial on my local machine. But I am using …
But I do see 2 singular values for both the serial and parallel cases. Same as Alex reported.
Hmm... according to this it looks like … But I think what we are really interested in is the output of …
Correct. The conforming case runs "fine" in serial but it's still singular. Just getting lucky.
This is why I tried second order for the primal variable. All the tests I added originally for mortar used a mixed-order discretization for saddle point problems. But yeah, I also didn't expect the theorized instability to necessarily translate to a singular matrix. And the non-conforming case with equal order is not singular (in the "non-conforming" case the interface end-points are perfectly aligned, but those are the only points of alignment; it's your old mesh with 2 elements on one side and 3 on the other). The 2x2 mesh suggestion is a good one. I'll give that a shot and report back.
Yes, I agree. I think @lindsayad and I both observe the same performance now for this test.
Ok I still can't see the linear combinations in this guy, but I know what's happening
This is two elements on the left, two on the right. Two singular values. The singularity doesn't have anything to do with mortar per se. The problem is that at the interface end points we are simultaneously specifying Dirichlet boundary conditions and solution continuity through a Lagrange multiplier. So we have equations like:

u0_slave = f0_slave (Dirichlet)
u1_slave = f1_slave (Dirichlet)

There ya go, two singularities. The moment I remove the competing Dirichlet conditions (and implicitly replace them with natural boundary conditions), the singularities go away. Similarly, if the primal variable is made second order, then the two end-point LMs that previously corresponded to the two singularities have another primal dof to contribute to, and the singularities also disappear. For the "non-conforming" input file with aligned end-points, we don't see this same issue because @dewenyushu is applying Neumann conditions instead of competing Dirichlet ones.
OK, that's an interesting result. I guess you'll need to be careful to avoid this issue in general problems as well, since it seems like it could be a pretty common thing to have Dirichlet BCs on one or both "ends" of a mortar region. Possibly it could be detected when the lower-dimensional elements are initially added, or something along those lines...
Nice findings! I get that there are competing Dirichlet BCs on the slave side at the two ends. However, it is not clear to me how that causes a singularity, and how 'replacing them with natural BCs' fixes the issue... @lindsayad, would you mind explaining more in this regard? Or maybe we can talk offline if that is easier.
Get conforming and nonconforming cases to converge, checked Jacobian pattern Fix broken tests, code restructure, and add tests Address issue idaholab#15215
Remove test for the conforming mesh Rebase against devel Address issue idaholab#15215
Force-pushed: d10f8fc → 96d5ef8
@lindsayad this PR is finally ready for review!
I think this is a really nice start, but I think we can refine the design in Assembly.
@@ -151,6 +159,7 @@ class MortarConstraintBase : public Constraint,
using ADMortarConstraint<compute_stage>::_phys_points_slave; \
using ADMortarConstraint<compute_stage>::_phys_points_master; \
using ADMortarConstraint<compute_stage>::_has_master; \
using ADMortarConstraint<compute_stage>::_use_dual; \
It looks like I missed removing this macro. compute_stage doesn't exist anymore. Can you remove this?
framework/src/base/Assembly.C (Outdated)
FEShapeData * fesd = _fe_shape_data_dual_lower[fe_type];

fe_lower->set_calculate_dual(true);
I ran into a segmentation fault the other day when trying to use a mixed-order discretization with u/order=SECOND and lm/order=FIRST and lm/use_dual=true. I suspect this is what caused it. (I actually don't think that's what caused it, because I don't think you should have been creating any non-dual _fe_lowers... 🤔) You don't necessarily have a non-NULL _fe_shape_data_dual_lower for every _fe_lower, do you? An _fe_lower is created any time you call either buildLowerDDualFE or buildLowerDFE, but an _fe_shape_data_dual_lower is only created when you call buildLowerDDualFE. I can imagine an (admittedly very unlikely) scenario in which someone requests a standard LM and a dual LM of different orders, and then you would be attempting to dereference a nullptr. I think that you should do something like this to dodge the issue:

if (!fesd)
  continue;
> You don't necessarily have a non-NULL _fe_shape_data_dual_lower for every _fe_lower, do you? An _fe_lower is created any time you call either buildLowerDDualFE or buildLowerDFE, but an _fe_shape_data_dual_lower is only created when you call buildLowerDDualFE.

That is correct. The current implementation gives a non-NULL _fe_shape_data_dual_lower only when the user chooses to use the dual method.
> I ran into a segmentation fault the other day when trying to use a mixed-order discretization with u/order=SECOND and lm/order=FIRST and lm/use_dual=true.

That is right. The dual method requires lm/order = u/order when use_dual=true; otherwise there would be an issue. Where do you suggest I add a warning about this, in case someone uses different orders for the LM and u?
Do we even need to use this API at all? If I had noticed this in the libMesh PR, I probably would've asked for it to be removed. There's no precedent for it in the FE hierarchy. We should know whether to calculate_dual based on the prior requests in buildLowerDDualFE. This is how I think a single reinitLowerDElemRef method should look:
void
Assembly::reinitLowerDElemRef(const Elem * elem,
const std::vector<Point> * const pts,
const std::vector<Real> * const weights)
{
mooseAssert(pts->size(),
"Currently reinitialization of lower d elements is only supported with custom "
"quadrature points; there is no fall-back quadrature rule. Consequently make sure "
"you never try to use JxW coming from a fe_lower object unless you are also passing "
"a weights argument");
_current_lower_d_elem = elem;
unsigned int elem_dim = elem->dim();
for (const auto & it : _fe_lower[elem_dim])
{
FEBase * fe_lower = it.second;
FEType fe_type = it.first;
fe_lower->reinit(elem, pts, weights);
if (FEShapeData * fesd = _fe_shape_data_lower[fe_type])
{
fesd->_phi.shallowCopy(const_cast<std::vector<std::vector<Real>> &>(fe_lower->get_phi()));
fesd->_grad_phi.shallowCopy(
const_cast<std::vector<std::vector<RealGradient>> &>(fe_lower->get_dphi()));
if (_need_second_derivative_neighbor.find(fe_type) != _need_second_derivative_neighbor.end())
fesd->_second_phi.shallowCopy(
const_cast<std::vector<std::vector<TensorValue<Real>>> &>(fe_lower->get_d2phi()));
}
if (FEShapeData * fesd = _fe_shape_data_dual_lower[fe_type])
{
fesd->_phi.shallowCopy(const_cast<std::vector<std::vector<Real>> &>(fe_lower->get_dual_phi()));
fesd->_grad_phi.shallowCopy(
const_cast<std::vector<std::vector<RealGradient>> &>(fe_lower->get_dual_dphi()));
if (_need_second_derivative_neighbor.find(fe_type) != _need_second_derivative_neighbor.end())
fesd->_second_phi.shallowCopy(
const_cast<std::vector<std::vector<TensorValue<Real>>> &>(fe_lower->get_dual_d2phi()));
}
}
}
My concern was whether to reinitialize both dual and non-dual shape functions, as I imagine only one of them will be used in a given simulation. If yes, then yeah, I agree this would be a better way to combine both functions and remove this API.
You need to compute the non-dual shape functions anyway in order to compute the dual shape functions. If you have a completely non-dual simulation, then get_dual_phi will never get called, and fe_lower->reinit won't do any dual-related computation. I don't see any scenario in the code I posted where you will be wasting work, but maybe I'm missing something...
I just checked after making the changes. You are right: for a completely non-dual simulation, get_dual_phi is never called, so the change causes no additional computation. Very helpful comments! Appreciate it!
// Whether to use dual mortar. Same for all mortar constraints
_use_dual = mc->use_dual();
Hmm, why do we have to do this? Now that I'm thinking about it, I can actually fairly easily imagine non-dual and dual LM calculations within the same simulation and with different orders for the LMs: any simulation with thermal and mechanical contact. Thermal contact does not have any saddle point issues and hence does not have a clear reason for using dual; moreover, you get optimal convergence in the primal variable if you use an order for the thermal LM of (primal_order - 1). I think that we need to figure out a way not to do what you're doing here. (Also, _use_dual in ComputeMortarFunctor would just end up corresponding to whatever the last mortar constraint's _use_dual value was, based on what you're doing here.)
The purpose was to pass in the use_dual input parameter in order to initialize the dual/non-dual lower-d elements accordingly. I have not encountered cases where different orders for the LMs exist in one simulation (and I'm not sure that would necessarily be common practice), so I just left a comment there and assumed the same use_dual value for all mortar constraints.
I'm not sure how to change it at this moment, but I can probably address this issue while dealing with the ones you pointed out later in this PR.
> I have not met cases where different orders for the LMs exist for one simulation (and not sure if that would necessarily be a common practice).

See #13080 (comment). If using first-order displacements, you will want to use a non-dual CONSTANT MONOMIAL for the thermal contact Lagrange multiplier. If using second-order displacements, you would want to use a LAGRANGE FIRST for the thermal contact LM. Based on what you told me, you will want to use an order equal to the displacement order for the mechanical contact LM. In any realistic BISON simulation you will have both thermal and mechanical contact, and so you will have a mixing of dual and non-dual and multiple LM orders.
if (_use_dual)
  _subproblem.reinitLowerDElemDualRef(slave_face_elem, &custom_xi1_pts);
else
  _subproblem.reinitLowerDElemRef(slave_face_elem, &custom_xi1_pts);
It would be nice to just have a single reinitLowerDElemRef call that would initialize both "standard" and "dual" data structures. I feel like we could do that...
> It would be nice to just have a single reinitLowerDElemRef call that would initialize both "standard" and "dual" data structures. I feel like we could do that...

(Uh, sorry, I did not entirely get your point earlier...) I am not sure we need to initialize both of them, as we will probably (mostly) only be using one of them... But I do not mind initializing both, as that would be an easier implementation :)
framework/src/problems/SubProblem.C (Outdated)
void
SubProblem::reinitLowerDElemDualRef(const Elem * elem,
                                    const std::vector<Point> * const pts,
                                    const std::vector<Real> * const weights,
                                    THREAD_ID tid)
{
  // - Set our _current_lower_d_elem for proper dof index getting in the moose variables
  // - Reinitialize all of our lower-d FE objects so we have current phi, dphi, etc. data
  assembly(tid).reinitLowerDElemDualRef(elem, pts, weights);

  // Actually get the dof indices in the moose variables
  systemBaseNonlinear().prepareLowerD(tid);
  systemBaseAuxiliary().prepareLowerD(tid);

  // With the dof indices set in the moose variables, now let's properly size
  // our local residuals/Jacobians
  assembly(tid).prepareLowerD();

  // Let's finally compute our variable values!
  systemBaseNonlinear().reinitLowerD(tid);
  systemBaseAuxiliary().reinitLowerD(tid);
}
Yea, there's a lot of code duplication here. Again, I think we can design this such that a call to Assembly::reinitLowerDElemRef initializes both standard and dual data structures.
Job Precheck on 0650de2 wanted to post the following: Your code requires style changes. A patch was auto-generated and copied here. Alternatively, with your repository up to date and in the top level of your repository:
…mputeMortarFunctor classes, issue idaholab#15215
Force-pushed: 0650de2 → be92a2c
Ah it looks so clean and beautiful! I love it. Great work!
@dewenyushu Looks like your PR caused some valgrind problems. Would you mind seeing if you can fix them?
Will do. Looking into the problem now.
@andrsd Emm... I thought it would be an easy fix, but it turns out that I am not familiar with the valgrind testing system and have no clue where this memory issue comes from... would you mind giving me a hint on this? Thanks!
Assembly::buildLowerDDualFE(FEType type) const
{
  if (!_fe_shape_data_dual_lower[type])
    _fe_shape_data_dual_lower[type] = new FEShapeData;
This needs to be freed in Assembly::~Assembly().
nailed it
Can't we use smart pointers instead?
nice catch!
Assembly::buildVectorDualLowerDFE(FEType type) const
{
  if (!_vector_fe_shape_data_dual_lower[type])
    _vector_fe_shape_data_dual_lower[type] = new VectorFEShapeData;
And also this one.
Thanks for trying to help!
Address part of the issues 2550 and #15215
Note: To be merged after this PR in libmesh.