
Add FEEvaluationImplBasisChange::do_backward_hessians #10869

Merged — 2 commits into dealii:master on Sep 4, 2020

Conversation

peterrum (Member) commented Sep 1, 2020

Extracted from #10830. Needed for applying the inverse matrix in the ECL context.

Comment on lines 901 to 910:

    do_backward_hessians(
      const unsigned int                    n_components,
      const dealii::AlignedVector<Number2> &transformation_matrix,
      const bool                            add_into_result,
      Number *                              values_in,
      Number *                              values_out,
      const unsigned int                    basis_size_1_variable =
        dealii::numbers::invalid_unsigned_int,
      const unsigned int                    basis_size_2_variable =
        dealii::numbers::invalid_unsigned_int)
Member

I must admit that I do not like duplicating the do_backward function just to bypass the assumption of a zero somewhere in the values call in that function. We should make the interfaces uniform by providing a single apply function in the tensor product evaluators with an enum informing us about special cases; see also #7039.

Member Author

I agree that we should try not to introduce an additional specialization for hessians. But let's not remove the values/gradients/hessians function calls from all places now (I can put it on my TODO list). As suggested, I have introduced an enum (the name is up for discussion) and use it now in the FEEvaluationImplBasisChange class. What do you think, @kronbichler?

Member

Absolutely, there is no need to clean things up now, nor do we need to remove the values/gradients/hessians calls yet. But I was thinking of providing all evaluators with a static void apply function and uniform template parameters across the variants. Or, even better, a single code path that runs through the tensor products with just different 1D matrix-vector products.
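To illustrate the idea discussed above, here is a minimal, self-contained sketch of a single apply entry point where an enum selects which 1D matrix is used, so that value/gradient/hessian variants can share one tensor-product loop instead of duplicated do_backward/do_backward_hessians functions. The enum name `EvaluatorQuantity` and the function `apply_1d` are hypothetical placeholders, not the actual deal.II interface (the PR notes the name was still up for discussion):

```cpp
#include <cassert>
#include <vector>

// Hypothetical enum (name up for discussion in the PR): selects which
// quantity the 1D kernel applies.
enum class EvaluatorQuantity { value, gradient, hessian };

// Single templated entry point: the enum template parameter picks the 1D
// matrix, so all variants run through the same matrix-vector loop. This is
// only a sketch; the real evaluators work on tensor-product data layouts.
template <EvaluatorQuantity quantity>
void apply_1d(const std::vector<double> &shape_values,
              const std::vector<double> &shape_gradients,
              const std::vector<double> &shape_hessians,
              const double *in,
              double *out,
              const unsigned int n)
{
  // Select the 1D matrix according to the requested quantity.
  const std::vector<double> &matrix =
    quantity == EvaluatorQuantity::value    ? shape_values :
    quantity == EvaluatorQuantity::gradient ? shape_gradients :
                                              shape_hessians;
  // One shared n x n matrix-vector product for all quantities.
  for (unsigned int i = 0; i < n; ++i)
    {
      double sum = 0.;
      for (unsigned int j = 0; j < n; ++j)
        sum += matrix[i * n + j] * in[j];
      out[i] = sum;
    }
}
```

The design point is that special cases (such as the zero assumption mentioned above) become branches on the enum inside one function, rather than separate near-duplicate functions.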

Member

@kronbichler left a review comment:

I guess this is fine now; it does not help immediately, but it will once we have clearer interfaces elsewhere.

@kronbichler kronbichler merged commit e16f8d5 into dealii:master Sep 4, 2020