Add FEEvaluationImplBasisChange::do_backward_hessians #10869
Conversation
do_backward_hessians(
  const unsigned int                    n_components,
  const dealii::AlignedVector<Number2> &transformation_matrix,
  const bool                            add_into_result,
  Number                               *values_in,
  Number                               *values_out,
  const unsigned int                    basis_size_1_variable =
    dealii::numbers::invalid_unsigned_int,
  const unsigned int                    basis_size_2_variable =
    dealii::numbers::invalid_unsigned_int)
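Conceptually, the backward basis change applies the transpose of the transformation matrix to each component, either overwriting or accumulating into the output. The following is a minimal sketch of that semantics with plain loops (assuming a row-major basis_size_2 × basis_size_1 matrix layout); the real deal.II implementation uses sum-factorization kernels, not this naive contraction:

```cpp
#include <cassert>
#include <vector>

// Illustrative sketch only: apply the transpose of the transformation
// matrix (assumed row-major, basis_size_2 x basis_size_1) to each
// component. `add_into_result` selects accumulate vs. overwrite, as in
// the do_backward interface above.
template <typename Number>
void do_backward_sketch(const unsigned int         n_components,
                        const std::vector<Number> &transformation_matrix,
                        const bool                 add_into_result,
                        const Number              *values_in,
                        Number                    *values_out,
                        const unsigned int         basis_size_1,
                        const unsigned int         basis_size_2)
{
  for (unsigned int c = 0; c < n_components; ++c)
    for (unsigned int i = 0; i < basis_size_1; ++i)
      {
        Number sum = 0;
        for (unsigned int j = 0; j < basis_size_2; ++j)
          sum += transformation_matrix[j * basis_size_1 + i] *
                 values_in[c * basis_size_2 + j];
        Number &dst = values_out[c * basis_size_1 + i];
        dst = add_into_result ? dst + sum : sum;
      }
}
```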
I must admit that I do not like duplicating the do_backward function just to bypass the assumption of a zero somewhere in the values call inside that function. We should make the interfaces uniform by providing a single apply function in the tensor product evaluators, with an enum informing us about the special cases; see also #7039.
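To make the suggestion concrete, here is a rough sketch of what such an enum-based dispatch could look like. All names are hypothetical, not the interface actually proposed in #7039; the point is only that one entry point selects the 1D kernel via an enum instead of duplicating do_backward per quantity:

```cpp
#include <cassert>
#include <vector>

// Hypothetical quantity enum; names are illustrative only.
enum class EvaluatorQuantity { value, gradient, hessian };

// Single backward (transpose) kernel entry point dispatching on the
// quantity, instead of a separate do_backward_* per case.
template <typename Number>
void apply_1d(const EvaluatorQuantity    quantity,
              const std::vector<Number> &matrix, // n_rows x n_cols, row-major
              const Number              *in,
              Number                    *out,
              const unsigned int         n_rows,
              const unsigned int         n_cols)
{
  switch (quantity)
    {
      case EvaluatorQuantity::hessian:
        // Hessian kernel: no zero is assumed in the input, so the full
        // matrix is contracted.
        for (unsigned int i = 0; i < n_cols; ++i)
          {
            Number sum = 0;
            for (unsigned int j = 0; j < n_rows; ++j)
              sum += matrix[j * n_cols + i] * in[j];
            out[i] = sum;
          }
        break;
      default:
        // The value/gradient kernels could exploit their special
        // structure here; the generic loop is shown for brevity.
        for (unsigned int i = 0; i < n_cols; ++i)
          {
            Number sum = 0;
            for (unsigned int j = 0; j < n_rows; ++j)
              sum += matrix[j * n_cols + i] * in[j];
            out[i] = sum;
          }
        break;
    }
}
```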
I agree that we should try not to introduce an additional specialization for hessians. But let's not remove the values/gradients/hessians function calls from all places now (I can put it on my TODO list). As suggested, I have introduced an enum (the name is up for discussion) and now use it in the FEEvaluationImplBasisChange class. What do you think, @kronbichler?
Absolutely, there is no need to clean things up now, nor do we need to remove the values/gradients/hessians calls yet. But I was thinking of providing all evaluators with a static void apply function and uniform template parameters across the variants. Or, even better, a single code path that runs through the tensor products with just different 1D matrix-vector products.
I guess this is fine now; it does not help immediately, but it will once we have clearer interfaces elsewhere.
Extracted from #10830. Needed for applying the inverse matrix in the ECL context.