Make all template specializations of FPE compile for float #14155

Closed

Conversation

peterrum (Member)

As reported by exadg/exadg#252.

FYI @jh66637

Comment on lines +291 to +293
+ template <typename Number2>
  inline DerivativeForm<order, dim, spacedim, Number>::
-   operator Tensor<order + 1, dim, Number>() const
+   operator Tensor<order + 1, dim, Number2>() const
bangerth (Member)


This doesn't sound quite right to me. Shouldn't the operator just return a Tensor representation of what it stores internally, and then if someone wants this in another type they can later cast from that tensor to one with a different scalar?

Note that that's basically what you do in the implementation, which now looks as follows:

template <int order, int dim, int spacedim, typename Number>
template <typename Number2>
inline DerivativeForm<order, dim, spacedim, Number>::
operator Tensor<order + 1, dim, Number2>() const
{
  Assert((dim == spacedim), ExcMessage("Only allowed when dim==spacedim."));

  Tensor<order + 1, dim, Number> t; // <<<<<<<<<<<<<<<<<< old tensor type
  if (dim == spacedim)
    for (unsigned int j = 0; j < dim; ++j)
      t[j] = (*this)[j];
  return t; // <<<<<<<<<<<< cast to Tensor<....,Number2> happens here
}

My preference would be to make the place that calls this function do the cast at the calling site, not internally here.
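
For illustration, a minimal sketch of what casting at the calling site could look like, assuming the conversion operator keeps returning Tensor<order + 1, dim, Number> and that Tensor's converting constructor between scalar types can be used; the helper name to_float_tensor is made up for this example and is not part of the patch:

#include <deal.II/base/derivative_form.h>
#include <deal.II/base/tensor.h>

using namespace dealii;

// Hypothetical helper, only to illustrate the call-site conversion: take a
// double-precision DerivativeForm and hand back a single-precision Tensor.
template <int dim>
Tensor<2, dim, float>
to_float_tensor(const DerivativeForm<1, dim, dim, double> &form)
{
  // 1. Ask the form for a Tensor representation in its own scalar type ...
  const auto t_double = static_cast<Tensor<2, dim, double>>(form);

  // 2. ... and only then change the scalar type, explicitly, at the caller.
  return Tensor<2, dim, float>(t_double);
}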

Member

I tend to agree with @bangerth here: since this only converts the end result, I would prefer doing that explicitly in FEPointEvaluation. (I was confused by the abbreviation FPE, which I read as floating point exception.)

peterrum (Member Author)


That the tensor used the old number type was not intended:

Tensor<order + 1, dim, Number> t; // <<<<<<<<<<<<<<<<<< old tensor type

I have adjusted that.

Since this operator is for casting, the cast is already done explicitly in FEPointEvaluation:

gradients[i + j] = static_cast<gradient_type>(
  apply_transformation(mapping_info->get_mapping_data()
                         .inverse_jacobians[i + j]
                         .transpose(),
                       unit_gradients[i + j]));

The alternative would be to cast from DerivativeForm to typename internal::FEPointEvaluation::EvaluatorTypeTraits<dim, n_components, double>::gradient_type (note that one has to use double instead of Number in this long expression) and after that cast it to a float version (or do an assignment, which seems to be broken in Tensor).
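
Spelled out as code, that alternative might look roughly as follows; the identifiers (gradient_type, mapping_info, unit_gradients, EvaluatorTypeTraits) are taken from the snippet above and from this comment, so treat it as a sketch against the FEPointEvaluation internals rather than an actual patch:

// Sketch only: first convert to the double-precision gradient type, then
// convert that, in a second explicit step, to the (possibly float)
// gradient_type.
using gradient_type_double = typename internal::FEPointEvaluation::
  EvaluatorTypeTraits<dim, n_components, double>::gradient_type;

const auto grad_double = static_cast<gradient_type_double>(
  apply_transformation(mapping_info->get_mapping_data()
                         .inverse_jacobians[i + j]
                         .transpose(),
                       unit_gradients[i + j]));

gradients[i + j] = static_cast<gradient_type>(grad_double);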

bangerth (Member)


I think what I meant to say is this: I consider the cast operator as saying "give me a tensor representation of this object", the emphasis being on "representation of this object" rather than "give me". The emphasis here is on the object. As a consequence, a DerivativeForm<...,double> should return a Tensor<...,double>.

In contrast, your implementation is for the question "I want to use this object as an X", in which the emphasis is on "want to use" -- a caller-side consideration. I would prefer an explicit cast to Tensor<...,Number2> in such situations.

peterrum (Member Author)

Closing in favor of #14155, whatever the outcome will be there.

peterrum closed this on Jul 30, 2022