Implementing closures for large eddy simulation #120
To me this looks like implementing arbitrary functions to replace a specific term in the momentum equations, which is kind of similar to #73 #85.
Is a closure always meant to replace the viscous dissipation (𝜈∇²u) operators? Will Smag/AMD replace the Laplacian diffusion operators (e.g. κ∇²T) as well?
Does this apply to constant/dynamic Smagorinsky or AMD?
Out of curiosity, can these closures be framed as
Makes sense to me! If it's an arbitrary function then we should be able to implement any closure (or no closure which should produce inviscid flow?).
With finite volume the interpolation operators (…). Sounds like #115 (and #59 (comment)) should be resolved before this closure abstraction is merged in?
I suppose we can interpret every term in the momentum equation as an "arbitrary function". The design I've been using is to define a type that represents part of the governing equation to make the equation modular and changeable by the user. Definitely suggest alternative designs if good alternatives come to mind.
I am proposing that we consider using this conceptual model for a "closure". Perhaps a better name than "closure" is
A subgrid turbulent kinetic energy variable in a typical turbulence closure obeys a three-dimensional PDE and is advected, diffused, dissipated, and interacts with terms in the momentum and tracer equation via nonlinear terms. It is a first-class solution variable similar to the velocity field or active tracers. See equation 12 in Moeng 1984.
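For readers unfamiliar with this kind of closure, a generic schematic of a subgrid kinetic energy equation (not the exact equation 12 of Moeng 1984; symbols like $\nu_e$ and $\varepsilon$ are my own placeholders) looks like:

```latex
\frac{\partial e}{\partial t} + \bar{u}_j \frac{\partial e}{\partial x_j}
  = \underbrace{-\,\overline{u_i' u_j'}\,\frac{\partial \bar{u}_i}{\partial x_j}}_{\text{shear production}}
  + \underbrace{\frac{g}{\theta_0}\,\overline{u_3' \theta'}}_{\text{buoyancy}}
  + \underbrace{\frac{\partial}{\partial x_j}\!\left( 2 \nu_e \frac{\partial e}{\partial x_j} \right)}_{\text{diffusion}}
  - \underbrace{\varepsilon}_{\text{dissipation}}
```

The point being that $e$ is advected, produced, diffused, and dissipated just like any other prognostic variable, which is why it can't be treated as a diagnostic afterthought.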
I think it makes sense to resolve #107, #115, and #59 before implementing anything, especially #59 because it affects the function signature we use for the closures.
@SandreOuza @christophernhill @jm-c will be good to have your feedback too!
Oh no, I think the design is great! I was just saying that it's similar to what we're doing for forcing functions, so implementing arbitrary closures should be pretty straightforward (more so than implementing boundary conditions, #118). Of course, implementing specific closures may pose a challenge.
This makes sense to me. Running without a "closure" should mean using a molecular viscosity and diffusivity (which isn't practical, so in practice we're always using a closure). I'm not super familiar with the literature but it sounds like "turbulent diffusion" is what is being modeled.
Thanks for the reference, I see what you mean now. I can see how an
Definitely agree on #59 since that will change a lot of function call signatures. #107 and #115 are more backend-related and shouldn't change how the functions are called, so they aren't as urgent I think (though still high priority).
quite!
Just know that we will want to run some simulations at high resolution without closures in order to test the accuracy of the coarse model + closure.
Wouldn't that involve resolving all scales down to the Kolmogorov scale? So this would be for tiny ~1 m³ domains?
Yup.
After thinking about LES closures a bit more, I think your …

Just had a quick look at Smagorinsky in the MITgcm manual and I'm pretty sure no intermediate storage is required. For example, element-wise horizontal Smagorinsky viscosity for momentum would roughly look like

```julia
@inline smagorinsky_νh(grid, u, v) =
    (νS*Δ)^2 * sqrt((δx_f2c(u)/Δx - δy_f2c(v)/Δy)^2 + (δy_f2e(u)/Δy + δx_f2c(v)/Δx)^2)
```

I think doing anisotropic minimum dissipation (AMD) would be similar (just more terms), and it might make sense to do all the computations on the fly with no intermediate storage to reduce memory usage on a GPU.

Not actually working on this issue, just procrastinating and thinking about it.
@glwagner Do you want to close this issue?
Word.
A high priority on the to-do list right now is writing LES closures. I'd like to discuss the implementation here before we open a PR.
I propose that we view a "closure" as the addition of a viscosity/diffusivity, even a constant isotropic 'molecular' diffusivity. Moving the implementation of viscosity/diffusivity to the closure will make it less painful to write closures that modify boundary conditions, as they sometimes do. In addition, we may need to implement special time-stepping methods that apply both to LES closures and diffusivities (as discussed with @jm-c, @ali-ramadhan and @SandreOuza and in #95).
We may also want/need to implement LES closures that add additional 'tracers' (like subgrid scale turbulent kinetic energy) that have their own evolution equations. We'll do the easy 'eddy diffusivity' closures first, but we must keep in mind that including optional evolution equations for closure-specific tracers requires an implementation of an abstraction for equations (as discussed in #110).
For now (and for example) we can implement something like a `TurbulentDiffusivity` type and add a field to `Model` called `closure` (perhaps). The fields of `TurbulentDiffusivity` are functions that compute stress/flux divergence for momentum and tracers at `i, j, k` (and we need them to have a common function signature to call in the time-stepping loop). In the case of constant but anisotropic diffusivity (which is currently implemented), for example, we could define such functions element-wise and then call them in the time-stepping loop.
Does that make sense? Please criticize. The LES closures will involve much more complicated functions.
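To make the proposal concrete, here is a minimal, self-contained sketch of what such a type could look like. All names here (`TurbulentDiffusivity` field names, the toy operator `δ²`, `constant_closure`) are invented for illustration and the real implementation would use the code base's actual grid operators and full `(grid, u, v, w, T, i, j, k)`-style signatures:

```julia
# Hypothetical sketch: a "closure" is a container of kernel functions that
# share a common element-wise signature, so the time-stepping loop can call
# any closure the same way.
struct TurbulentDiffusivity{F1, F2}
    ∂ⱼτ₁ⱼ :: F1   # stress divergence for x-momentum at a grid point
    ∇κ∇T  :: F2   # flux divergence for a tracer at a grid point
end

# Toy 1D second-difference operator standing in for the real grid operators:
δ²(f, i) = f[i-1] - 2f[i] + f[i+1]

# A constant-diffusivity "closure" with molecular viscosity ν and diffusivity κ:
constant_closure(ν, κ) = TurbulentDiffusivity(
    (u, i) -> ν * δ²(u, i),
    (T, i) -> κ * δ²(T, i),
)

closure = constant_closure(1e-6, 1e-7)
u = [0.0, 1.0, 4.0, 9.0]
du = closure.∂ⱼτ₁ⱼ(u, 3)   # 1e-6 * (1 - 8 + 9) = 2e-6
```

The design choice this illustrates: the time-stepping loop only ever sees the common signature, so swapping constant diffusivity for Smagorinsky or AMD means constructing a different `TurbulentDiffusivity` rather than editing the loop.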
LES closures like "Constant Smagorinsky" and "Anisotropic Minimum Dissipation" (AMD) fall into the category of "eddy diffusivity closures", in that the closure is expressed as a locally evaluated nonlinear eddy viscosity and diffusivity (see also the dedaLES documentation for AMD). As discussed long ago with @jm-c, the implementation of an eddy diffusivity closure on a staggered grid requires computing the eddy viscosity at cell centers and on all three faces. In our new notation, these four locations are
- `(Center, Center, Center)`
- `(Interface, Center, Center)`
- `(Center, Interface, Center)`
- `(Center, Center, Interface)`
In order to calculate the stress divergence for eddy viscosities, therefore, we need the eddy viscosity at `(Center, Center, Center)`. On the other hand, to compute tracer fluxes we need the eddy diffusivity at the three other locations. Something to consider; @jm-c may have more to say on the matter. Looking at the AMD formulas, it seems that some averaging/interpolation is necessary to calculate the terms that contribute to the eddy viscosity in the correct locations.
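To illustrate the averaging/interpolation mentioned above: if the eddy viscosity is computed at cell centers, the usual way to obtain it at an adjacent interface on a staggered grid is a two-point average. A toy 1D sketch (invented names, not the code base's actual interpolation operators):

```julia
# ν held at cell centers (Center); interpolate to x-interfaces (Interface)
# by averaging the two adjacent center values, as on a staggered C-grid.
ν_center = [1.0, 2.0, 4.0]

# Interface i sits between centers i-1 and i (interior interfaces only):
ν_interface(ν, i) = 0.5 * (ν[i-1] + ν[i])

ν_interface(ν_center, 2)   # 1.5
```

In 3D the same idea applies direction by direction, which is where the four locations listed above come from: one center value plus one interpolated value per interface direction.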