Documenting experience with the Kolmogorov-to-grid scale ratio required for numerical stability #20
Definitely one of the many interesting things about 3D turbulence is how quickly the small scales show up compared to the 2D case! I am still working on getting a good intuition for exactly why the numerics break (or don't) for a given situation.
Do you have a sense of why this is the case? It seems that, aside from the question of accuracy, these subgrid models should be able to scale to coarser grids and still remove sufficient grid-scale energy.

If you are finding stability only in regimes where the grid scale is ~5 times the Kolmogorov scale, the model may be running essentially as a DNS. That is, do the eddy viscosity terms matter to the stability in those runs, or would the model run stably with no closure? I'm thinking of a paper by Bill Smyth where he discusses DNS runs as often having a grid spacing of 3-6 times the Kolmogorov scale (I can dig this up if needed).

This is of interest to me, as I am still failing to get reasonable geophysical-type setups to run, regardless of closure type. It appears to me that in these runs the eddy viscosity often takes on negative values (despite the zero_max function). I wonder if this may be due to under-resolving sharp features in the calculated eddy viscosity/diffusivity. I've tried a test run with increased dealiasing, but that doesn't seem to solve the problem. A spectral filter could be added in addition to the eddy viscosity, but it would be nice to avoid this sort of ad hoc choice.
Cheers,
Jacob
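For concreteness, a spectral filter of the kind mentioned above might be sketched as follows. This is a hypothetical NumPy illustration, not anything from dedaLES: an exponential low-pass filter applied in Fourier space to a 1D periodic field, with illustrative `alpha` and `p` parameters.

```python
import numpy as np

def spectral_filter(f, alpha=36.0, p=8):
    """Exponential low-pass filter exp(-alpha * (k/kmax)**p) applied
    in Fourier space to a 1D periodic field on a uniform grid."""
    N = f.size
    k = np.fft.rfftfreq(N, d=1.0 / N)       # integer wavenumbers 0 .. N/2
    damp = np.exp(-alpha * (k / k.max()) ** p)
    return np.fft.irfft(np.fft.rfft(f) * damp, n=N)

# Low wavenumbers pass essentially untouched; near-grid-scale content is
# strongly damped
x = np.arange(64) / 64
low = np.sin(2 * np.pi * x)
print(np.max(np.abs(spectral_filter(low) - low)))   # tiny
```

The steep exponent concentrates the damping near the grid scale, which is why such filters are sometimes used as a stand-in for (or supplement to) an eddy viscosity.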
It looks like the zero_max function in AMD is implementing a hard max rather than a soft max. This may be part of the problem: computing the hard max on a certain grid may result in oscillations, leading to negative values at different points.
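For illustration, here is one way the distinction might look. This is a sketch that assumes zero_max is a pointwise clip at zero; the smooth alternative shown is just one hypothetical option, not something in dedaLES.

```python
import numpy as np

def hard_max(nu_e):
    """Pointwise clip at zero (what zero_max is presumed to do).
    The kink at zero produces sharp features that a spectral
    representation can only capture with oscillations."""
    return np.maximum(nu_e, 0.0)

def soft_max(nu_e, eps=1e-6):
    """A smooth approximation to max(x, 0): (x + sqrt(x**2 + eps)) / 2.
    Strictly positive and infinitely differentiable, so it is gentler
    on a spectral basis."""
    return 0.5 * (nu_e + np.sqrt(nu_e**2 + eps))

x = np.array([-1.0, -0.1, 0.0, 0.1, 1.0])
print(hard_max(x))   # kink at zero
print(soft_max(x))   # smooth and strictly positive
```

The two agree away from zero; the difference is only in how the transition is handled, which is exactly where sharp spectral features arise.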
@kburns this wouldn't affect
Can we use a spectral filter on the eddy viscosity/diffusivity itself, rather than the solution?
Well, I'm not sure which closure @wenegrat was referring to, but it doesn't look like we currently have any regularization on the constant Smagorinsky eddy viscosity, right?
Right @kburns, there is no regularization on Smagorinsky; it is "guaranteed" > 0. So perhaps negative viscosity is only possible with regularization? I'm still puzzled about how LES is ordinarily done with spectral methods.
I think it's only guaranteed to be positive on the dealiased (scales=3/2) grid points, where it is calculated, but the spectral interpolant may end up being negative at other points, perhaps including the output grid (scales=1) if there are sharp features in the field.
Yes, I have noticed similar behavior (negative eddy viscosity) with the Smagorinsky closure as well, which as Greg says should be positive definite. Am I the only one seeing this behavior?

Keaton, I understand your comments to imply that if I ran with no dealiasing, the negative values should disappear; is this correct? I can try to test this, although it might take me a few weeks to get to it.
It's a little subtle: during the simulation, the subgrid viscosity is calculated on the grid, where it should be non-negative, and immediately multiplied by the other terms to produce the subgrid stress. However, if you designate the subgrid viscosity as an output, there may be a round-trip transform to coefficient space and back to grid space before it is saved to disk. Even without dealiasing, this may slightly change the grid values, since the Fourier Nyquist mode is dropped during these transforms, so the FFT is not exactly one-to-one. (As an aside, this is a bit of a pain, but when you think about applying operators in coefficient space, it doesn't make much sense to keep and work with the Nyquist mode.)

So this may result in slightly negative values, on the order of the Nyquist mode amplitude. This may seem bad, but it's also the order of magnitude of the truncated terms, so it's probably a decent estimate of how negative the truncated spectral interpolant may be between the grid points anyway.
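A minimal NumPy sketch of this effect (illustrative, not dedaLES code): a non-negative grid field with a sharp feature is round-tripped through coefficient space with its Nyquist mode dropped, and comes back slightly negative at some grid points.

```python
import numpy as np

N = 32
x = 2 * np.pi * np.arange(N) / N
# A non-negative grid field with a sharp feature, standing in for a
# clipped eddy viscosity
f = np.maximum(np.sin(3 * x), 0.0)

# Round trip through coefficient space, dropping the Nyquist mode
c = np.fft.rfft(f)
c[-1] = 0.0                   # last rfft bin is the Nyquist mode (k = N/2)
f_back = np.fft.irfft(c, n=N)

print(f.min(), f_back.min())  # f.min() is 0; f_back dips slightly negative
```

The negative excursion is on the order of the Nyquist-mode amplitude, consistent with the estimate above.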
@SandreOuza has found that dedaLES will run stably only if the ratio between the grid scale Δ and the Kolmogorov scale ℓ = (ν³/ε)^(1/4), where ν is the kinematic viscosity and ε is the turbulent dissipation rate, is below a certain number (perhaps around Δ/ℓ = 5). We should document this empirical rule. (Related to #19.)
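The rule of thumb is easy to check for a given setup. A quick sketch, with purely illustrative values for ν, ε, domain size, and resolution (none of these come from the issue):

```python
def kolmogorov_scale(nu, epsilon):
    """Kolmogorov length scale: eta = (nu**3 / epsilon)**(1/4)."""
    return (nu**3 / epsilon) ** 0.25

# Illustrative values: water-like viscosity and a moderate dissipation rate
nu = 1e-6        # kinematic viscosity [m^2/s]
epsilon = 1e-6   # turbulent dissipation rate [m^2/s^3]
eta = kolmogorov_scale(nu, epsilon)

L, n = 1.0, 256  # hypothetical domain size [m] and grid points per dimension
delta = L / n    # grid spacing

print(f"eta = {eta:.2e} m, delta/eta = {delta / eta:.1f}")
```

If the resulting Δ/ℓ comes out well above ~5, the empirical rule suggests the run may not be stable without a more dissipative closure or finer resolution.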