
New kernels to increase model flexibility #11830

Merged (1 commit, Aug 10, 2018)

Conversation

itgreenquist
Contributor

Introduces two new phase field kernels and modifies one existing kernel so that kappa and mu can be implemented as functions rather than constants.
Also includes one new test which covers both new kernels, and documentation for the kernels.

Closes #11829

@moosebuild
Contributor

moosebuild commented Jul 11, 2018

Job Documentation on 8ffa755 wanted to post the following:

View the site here

This comment will be updated on new commits.

@itgreenquist itgreenquist changed the title New kernels to increase model flexibility. Closes #11829 New kernels to increase model flexibility Jul 12, 2018
@itgreenquist
Contributor Author

The test that failed in Test Intel is the new one I submitted in this PR. I've double-checked: it passes consistently on my computer, fails consistently in Test Intel, and passes in Test debug.

@permcody, what does that mean? Should I modify the test?

@permcody
Member

Numerics are just different between machines, compilers, etc. Take a look closely at that exodiff and read it. What it tells you is which value is different and by how much. In this case the "gold" value and your value from the test are both very tiny (1e-9 or smaller). The problem is that relative differences at that scale tend to get magnified a bit. There are several solutions that can be applied to this problem.

Not all of these can be applied in all situations:

  1. Solve to an even tighter tolerance
  2. Adjust your comparison tolerances
     • Relative tolerance can be changed (see Exodiff tester parameters)
     • Absolute tolerance can be changed (values under which comparisons are ignored)
  3. Avoid physics where the solution contains such tiny values (not always possible)
  4. Don't compare fields that are not of interest (not always a good idea)
  5. Skip specific compilers or platforms (generally a bad idea)

I'd work with suggestion #1 or #2. Let me know if you need further guidance.
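
For concreteness, a hedged sketch of what option 2 could look like in the test spec, using the Exodiff tester's rel_err and abs_zero parameters (the test name, input file, and values below are placeholders, not the actual test in this PR):

```
[Tests]
  [./new_kernels_test]
    type = 'Exodiff'
    input = 'new_kernels_test.i'
    exodiff = 'new_kernels_test_out.e'
    # Relax the relative comparison and ignore values below a small floor, so
    # platform-dependent noise in fields of order 1e-9 does not trip the diff.
    rel_err = 1e-5
    abs_zero = 1e-8
  [../]
[]
```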

@itgreenquist
Contributor Author

@dschwen have you had a chance to look at this?

@friedmud
Contributor

Pinging @dschwen

term1;
}

Real ACKappaFunction::computeFg() // gradient energy term
Member

This comment belongs in the header (with ///)

(_u[_qp] * _u[_qp] + sum_eta_j2) * (*_vals[i])[_qp] * (*_vals[i])[_qp] * _gamma[_qp];
} // for (unsigned int i = 0; i < _n_eta; ++i)
return 0.25 + var_phase + eta_phase + eta_interface;
} // Real ACBarrierFunction::calculate_f0()
Member

These comments are not necessary for blocks that fit comfortably on the screen

std::vector<const MaterialProperty<Real> *> _d2mudvardeta;

private:
Real calculate_f0();
Member

Naming convention

* Several kernels use a material property called mu. If mu is not a constant,
* then this kernel will calculate the bulk AC term where mu is the derivative term.
* It currently only takes a single value for gamma.
**/
Member

@dschwen commented Jul 30, 2018

please move this directly above the class declaration

It should not be necessary to use multiple $\gamma$'s in models with a non-constant
$m$.

### Inputs
Member

You probably mean "Input Parameters". This section is not necessary; the parameter descriptions are extracted from the executable. Just make sure your addParam etc. calls have proper doc strings.

validParams<ACBarrierFunction>()
{
InputParameters params = validParams<ACGrGrBase>();
params.addRequiredParam<MaterialPropertyName>("gamma", "The gamma value to use with the kernel");
Member

That doc string is not useful at all! Please spell out the physical significance of gamma.
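
For example, a hedged sketch of a more descriptive doc string (the stated physical role of gamma here is an assumption for illustration, not the wording that ended up in the PR):

```cpp
params.addRequiredParam<MaterialPropertyName>(
    "gamma",
    "Interaction coefficient between order parameters; it penalizes overlap of "
    "the order parameters and thereby controls the grain boundary energy and width");
```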

case Residual:
{
return _dmudvar[_qp] * calculate_f0();
} // case Residual:
Member

braces not needed here
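
That is, the case could simply read (the visible lines with the braces and trailing comment dropped):

```cpp
case Residual:
  return _dmudvar[_qp] * calculate_f0();
```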

sum_eta_j2 = 0.0;
for (unsigned int j = i + 1; j < _n_eta; ++j)
sum_eta_j2 += (*_vals[j])[_qp] * (*_vals[j])[_qp];
eta_phase += 0.25 * (*_vals[i])[_qp] * (*_vals[i])[_qp] * (*_vals[i])[_qp] * (*_vals[i])[_qp] -
Member

please use Utility::pow<> here
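
A minimal sketch of that suggestion applied to the visible lines, assuming libMesh's Utility::pow<N>() helper from libmesh/utility.h (the term cut off at the end of the excerpt is left out):

```cpp
#include "libmesh/utility.h" // Utility::pow<N>(x) computes x^N without a runtime loop

sum_eta_j2 = 0.0;
for (unsigned int j = i + 1; j < _n_eta; ++j)
  sum_eta_j2 += Utility::pow<2>((*_vals[j])[_qp]);
// and the quartic term becomes 0.25 * Utility::pow<4>((*_vals[i])[_qp])
```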

_kappa_name(getParam<MaterialPropertyName>("kappa_name")),
_dkappadvar(getMaterialPropertyDerivative<Real>(_kappa_name, _var_name)),
_d2kappadvar2(getMaterialPropertyDerivative<Real>(_kappa_name, _var_name, _var_name)),
_nv(coupledComponents("v")),
Member

This is almost always called _op_num.

@dschwen (Member) left a comment

See comments above


Real grad_var2 = _grad_u[_qp] * _grad_u[_qp];
return sum_grad_etai2 + grad_var2;
}
Contributor

Wait!! Why is this sum over all the gradient variables? In general, kappa values corresponding to different coupled variables can be different, right?

Contributor Author

The idea with the kernel is that you are using a functional kappa value instead of different kappas for each variable. It achieves a similar effect to using multiple kappas, but with transition regions. I can't think of a scenario where someone would want multiple functional kappas, but if they did they could try being selective on what variables to include in "v".

Contributor

Here is what I am trying to understand: the gradient energy term is $\sum_j \frac{\kappa_j}{2} |\nabla \eta_j|^2$, and you are implementing its derivative w.r.t. the order parameters. So shouldn't the generic implementation be $\sum_j \frac{1}{2} \frac{\partial \kappa_j}{\partial \eta_i} |\nabla \eta_j|^2$ instead of $\frac{1}{2} \frac{\partial \kappa}{\partial \eta_i} \sum_j |\nabla \eta_j|^2$? The latter assumes the same kappa function for all the coupled order parameters; cases such as polycrystal solidification can have different kappa functions.

Contributor Author

That would be the most generic case, but

  1. it would be a huge headache for me to rewrite the kernel
  2. the UI for the kernel would be more complicated and the kernel harder to use
  3. the feature would (almost) never be used
  4. in the rare case that it is needed, you can achieve the same effect by using the kernel multiple times with a single coupled eta each time, effectively doing the sum yourself (see the sketch below).

I looked up a couple of polycrystal solidification papers and could not find one with multiple functional kappas. One used multiple constant kappas, and another used a single functional kappa. Can you send a link to one with multiple functional kappas? I'm curious about the scenario where that would be desirable.
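
To illustrate point 4, a hedged input-file sketch (block names, variable names, and material property names are placeholders, not taken from the PR's tests, and other required parameters of the kernel are omitted): each kernel instance couples a single eta and points at its own functional kappa.

```
[Kernels]
  [./kappa_term_eta1]
    type = ACKappaFunction
    variable = eta0
    v = 'eta1'              # a single coupled order parameter
    kappa_name = kappa_eta1 # its own functional kappa
  [../]
  [./kappa_term_eta2]
    type = ACKappaFunction
    variable = eta0
    v = 'eta2'
    kappa_name = kappa_eta2
  [../]
[]
```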

Contributor

I understand. I am working on something similar to what you have found and intended to implement some associated kernels (#11838). It's okay, I can work around it for now. I was curious to know why the kernel is not implemented in the generic form. Thanks for the clarification.

Member

This kernel could be expanded in a later PR (even by you @SudiptaBiswas)

Contributor

Yeah, I would work on it later.

ACKappaFunction::computeQpOffDiagJacobian(unsigned int jvar)
{
const unsigned int i = mapJvarToCvar(jvar);
Real PreJac = 0.5 * _test[_i][_qp] * _phi[_j][_qp] * computeFg();
Contributor

Naming convention (being pedantic)

std::vector<const MaterialProperty<Real> *> _d2kappadvardv;

private:
Real computeFg();
Contributor

Probably the method names can be a little bit more informative! Just a suggestion.

@itgreenquist itgreenquist force-pushed the MooseKernels branch 2 times, most recently from 6beed4c to d49b510 on July 31, 2018 17:33
@friedmud
Contributor

friedmud commented Aug 8, 2018

@SudiptaBiswas do you believe you've satisfied all the comments?

@dschwen can you take a second look here?

@SudiptaBiswas
Contributor

@friedmud I did not work on this PR; it's entirely @itgreenquist's. I just had a few comments that we discussed. I will work on the expansion later in a separate PR.

@itgreenquist
Contributor Author

@friedmud I've addressed all of the comments.

@friedmud
Contributor

friedmud commented Aug 8, 2018

Ok - so we just need @dschwen to take a look and merge it then

@dschwen dschwen merged commit e5790f8 into idaholab:devel Aug 10, 2018