
converted gradient function to standard forward difference #68

Closed

Conversation

@tomflint22 commented Jun 27, 2019

The proposed changes are intended to improve the numerics of the gradient function. Currently the gradient is effectively a sum of forward and backward differences: (a(x+1) - a(x-1))/2dx. I think this doubles the error.

This is achieved by converting the gradients to a standard forward difference.

Fixes #, Addresses #

The modified code may have unintended consequences: I don't foresee any, but the old method may have performed some unintended averaging where it was used, so we should look out for this.
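
For reference, the two stencils being compared, as a minimal standalone sketch (illustrative only, not MMSP's actual gradient code):

```cpp
#include <vector>
#include <cstddef>

// Centered difference (current behavior): second-order accurate in dx.
double grad_centered(const std::vector<double>& a, std::size_t x, double dx)
{
    return (a[x + 1] - a[x - 1]) / (2.0 * dx);
}

// Forward difference (this PR): first-order accurate in dx.
double grad_forward(const std::vector<double>& a, std::size_t x, double dx)
{
    return (a[x + 1] - a[x]) / dx;
}
```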



@tkphd (Contributor) commented Jun 28, 2019

Thanks again for the contribution, @tomflint22. This is much more clear.

The Numerical Recipes chapter on Partial Differential Equations covers this in some detail. The bottom line is, neither the current implementation (FTCS) nor your proposed finite difference is numerically stable. Instead, the "correct" implementation would use the Lax method, which replaces the center point from the previous timestep with the average of its neighbors.

The following considers the conservative flux equation

dT
-- = -α ∇T
dt

FTCS

The current gradient function implements forward-time centered-space (FTCS), which uses neighboring data to compute the gradient rather than one-sided differencing. Note that "forward" and "backward" refer to time, not space.

T(j,n+1) - T(j,n)      T(j+1,n) - T(j-1,n)
----------------- = -α -------------------
        Δt                     2Δx

NR shows, through a von Neumann or linear stability analysis, that this discretization is unstable for all Δt.
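
As a sketch of that analysis: substituting a trial mode T(j,n) = ξⁿ e^{ikjΔx} into the FTCS update gives the amplification factor

$$\xi = 1 - i\,\frac{\alpha\,\Delta t}{\Delta x}\,\sin(k\,\Delta x), \qquad |\xi|^2 = 1 + \left(\frac{\alpha\,\Delta t}{\Delta x}\right)^2 \sin^2(k\,\Delta x),$$

so |ξ| > 1 for every mode with sin(kΔx) ≠ 0, no matter how small Δt is.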

Forward-time One-sided-space

Your proposed gradient function implements a one-sided difference. This is sometimes appropriate, especially near boundaries, but offers no benefit over FTCS in terms of accuracy or stability.

T(j,n+1) - T(j,n)      T(j+1,n) - T(j,n)
----------------- = -α -----------------
        Δt                     Δx
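
The same von Neumann substitution shows the instability here (a sketch, taking α > 0):

$$\xi = 1 - c\left(e^{ik\Delta x} - 1\right), \quad c = \frac{\alpha\,\Delta t}{\Delta x}, \qquad |\xi|^2 = 1 + 2c(1+c)\left(1 - \cos k\Delta x\right) > 1$$

for all modes with cos(kΔx) ≠ 1, again regardless of Δt.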

Lax method

NR reports

the instability in the FTCS method can be cured by... replac[ing] the term T(j,n) by its average

T(j,n) --> (T(j+1,n) + T(j-1,n)) / 2

Then,

T(j,n+1) - (T(j+1,n) + T(j-1,n)) / 2      T(j+1,n) - T(j-1,n)
------------------------------------ = -α -------------------
                 Δt                               2Δx

This discretization is conditionally stable, so long as Δx ≥ αΔt.
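
For concreteness, here is a minimal standalone sketch of one Lax step in 1D (plain C++ rather than MMSP's grid API; the periodic wrap and grid layout are illustrative assumptions):

```cpp
#include <vector>
#include <cstddef>

// One Lax step for dT/dt = -alpha * dT/dx on a periodic 1D grid.
// Illustrative only: MMSP's grid/ghost-cell machinery is not used here.
std::vector<double> lax_step(const std::vector<double>& T,
                             double alpha, double dt, double dx)
{
    const std::size_t N = T.size();
    std::vector<double> Tnew(N);
    const double c = alpha * dt / dx; // Courant number; stability needs c <= 1, i.e. dx >= alpha*dt
    for (std::size_t j = 0; j < N; ++j) {
        const double Tp = T[(j + 1) % N];     // T(j+1,n), periodic wrap
        const double Tm = T[(j + N - 1) % N]; // T(j-1,n)
        // Lax: the neighbor average replaces T(j,n); space difference stays centered.
        Tnew[j] = 0.5 * (Tp + Tm) - 0.5 * c * (Tp - Tm);
    }
    return Tnew;
}
```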

Resolution

If you update this PR to implement the Lax method, then I will be happy to merge it. Otherwise, let me know, and I will implement it myself.

I should also note that the centered-space discretization is appropriate (with second-order spatial truncation error) for computing time-independent quantities, such as the current free energy of a system. In solving things like the heat equation, dT/dt=α∇²T, you don't use the gradient function directly, but rather a discretized Laplacian function. If your α varies in space, you'll have to implement a custom Laplacian to handle that.

@tomflint22 (Author) commented

I see, thank you for clarifying. The current gradient implementation was falling over for me in the heterogeneous heat conduction example, at the interface with a region of higher thermal diffusivity. I used the product rule to break the div(alpha*grad(T)) term up into d(alpha)/dx * dT/dx (in one dimension), so taking the gradient of essentially a step function was generating artifacts over two grid points.
If this is a pain to implement, maybe it's best to leave it as it is, and I'll just write a custom gradient in my repo.

When I get some time next week, though, I'll have a go at implementing the Lax method you have suggested.

Once again, thank you for all your hard work with MMSP. I'd love to contribute.

All the best,

Tom

@tomflint22 (Author) commented

Sorry, I wrote that equation out wrong for heterogeneous conduction: d(alpha)/dx * dT/dx + alpha * d²T/dx²

@tkphd (Contributor) commented Jun 28, 2019

OK. The typical approach for implementing div(α·grad T) with variable α is to average α onto each face when forming the sub-differences. With constant α, you have the difference of one-sided fluxes divided by Δx: in 1D,

∇·(α∇T) = α∇²T
= [α(T_{j+1}-T_{j})/(Δx) - α(T_{j}-T_{j-1})/(Δx)]/(Δx)
= α/(Δx)²[T_{j+1}-2T_{j}+T_{j-1}]

where

q_{upper} = α * (T_{j+1}-T_{j})/(Δx)
q_{lower} = α * (T_{j}-T_{j-1})/(Δx).

For variable α,

q_{upper} = 0.5*(α_{j+1}+α_{j}) * (T_{j+1}-T_{j})/(Δx)
q_{lower} = 0.5*(α_{j-1}+α_{j}) * (T_{j}-T_{j-1})/(Δx)
∇·(α∇T) = (q_{upper} - q_{lower}) / (Δx)

Hopefully, implementing this discretization will stabilize your code.
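
As a sketch of that stencil in 1D (again plain C++ rather than MMSP's grid API; the array layout and the choice to skip boundary points are illustrative assumptions):

```cpp
#include <vector>
#include <cstddef>

// Discrete div(alpha * grad T) at interior points of a 1D grid,
// using face-averaged diffusivities as described above.
// Illustrative sketch only; boundary points are left untouched.
std::vector<double> div_alpha_grad(const std::vector<double>& T,
                                   const std::vector<double>& alpha,
                                   double dx)
{
    std::vector<double> result(T.size(), 0.0);
    for (std::size_t j = 1; j + 1 < T.size(); ++j) {
        // Flux through the upper face, with alpha averaged onto the face.
        const double q_upper = 0.5 * (alpha[j + 1] + alpha[j]) * (T[j + 1] - T[j]) / dx;
        // Flux through the lower face.
        const double q_lower = 0.5 * (alpha[j - 1] + alpha[j]) * (T[j] - T[j - 1]) / dx;
        result[j] = (q_upper - q_lower) / dx;
    }
    return result;
}
```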

@tomflint22 (Author) commented

Ace, thank you. That's really useful.

I'll keep trying to contribute; I love using your code, it's great.

All the best,

Tom
