
DOC: gradient uses 1st order central difference in the interior #8605

Merged
merged 1 commit into from
Feb 12, 2017

Conversation

drabach
Contributor

@drabach drabach commented Feb 11, 2017

For the gradient function for array-likes I corrected the documentation and the comments. The implementation was correct; only the documentation stated that the second order central difference is used, whereas in fact the first order difference is used (as it should be).

@charris charris changed the title gradient uses 1st order central difference in the interior DOC: gradient uses 1st order central difference in the interior Feb 12, 2017
@charris charris merged commit 5de1a82 into numpy:master Feb 12, 2017
@charris
Member

charris commented Feb 12, 2017

Thanks @drabach. For future reference, follow the commit message template in doc/source/dev/gitwash/development_workflow.rst

@apbard
Contributor

apbard commented Feb 12, 2017

Besides the typo "2st", I think the docs were right before. The function uses central finite differences with a 3-point stencil. For uniformly spaced data, that scheme is actually of 2nd order.

@drabach
Contributor Author

drabach commented Feb 12, 2017

What do you mean by "3 point stencil"? A central difference (first order or second order) always uses 3 points; that is why it is called "central". But that does not determine the order.

To explain the problem I have with the documentation, as a user of this function:
In the example one has the vector
x = np.array([1, 2, 4, 7, 11, 16], dtype=np.float)
and the gradient function gives
array([ 1. , 1.5, 2.5, 3.5, 4.5, 5. ])

The first value here is (2 - 1)/1 = 1 and the second value is (4 - 1)/2 = 1.5. I assume, as stated in the documentation, that the equal spacing is one, so at the boundary I divide by 1 and in the interior I have to divide by 2, and so on. Finite differences of first order in both cases?
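The arithmetic above can be reproduced directly. This is a sketch using plain NumPy slicing that mirrors what the example values imply, not the internal implementation of the function:

```python
import numpy as np

x = np.array([1., 2., 4., 7., 11., 16.])

g = np.gradient(x)  # default spacing h = 1

# Interior points: central difference (x[i+1] - x[i-1]) / (2*h)
interior = (x[2:] - x[:-2]) / 2.0

# Boundaries: one-sided first-order differences
first = (x[1] - x[0]) / 1.0    # (2 - 1) / 1  = 1.0
last = (x[-1] - x[-2]) / 1.0   # (16 - 11) / 1 = 5.0

manual = np.concatenate(([first], interior, [last]))
assert np.allclose(g, manual)
print(g)  # [1.  1.5 2.5 3.5 4.5 5. ]
```

So the boundary values do come from 2-point one-sided differences, while the interior values come from the 3-point central formula.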

@apbard
Contributor

apbard commented Feb 12, 2017

@drabach the number of points used to compute the approximation is usually referred to as the "stencil". In this case, for a central difference we use the indexes i-1, i, i+1, therefore a "3-point stencil".
The current implementation of gradient uses, at the interior points, the approximation f'(x_i) = (f(x_i + h) - f(x_i - h)) / 2h, which has a truncation error of O(h^2). Thus, it is a second order approximation.
On the boundaries you can still use a 3-point off-centered approximation (still 2nd order), or use the forward/backward 2-point approximations, which are only 1st order (i.e. have a truncation error of O(h)).
I am extending the documentation of gradient a bit in PR #8446 and I hope it will also make this topic a bit clearer.
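The truncation orders mentioned above can be checked numerically. This sketch uses sin (whose derivative is known exactly) and halves the spacing h repeatedly; the central-difference error should shrink by roughly a factor of 4 per halving (2nd order), the forward-difference error only by roughly a factor of 2 (1st order):

```python
import numpy as np

f, fprime = np.sin, np.cos   # test function with a known derivative
x0 = 1.0

for h in (0.1, 0.05, 0.025):
    central = (f(x0 + h) - f(x0 - h)) / (2 * h)   # truncation error O(h^2)
    forward = (f(x0 + h) - f(x0)) / h             # truncation error O(h)
    print(f"h={h:<6} central err={abs(central - fprime(x0)):.2e} "
          f"forward err={abs(forward - fprime(x0)):.2e}")

# Halving h shrinks the central error ~4x but the forward error only ~2x,
# which is exactly the difference between 2nd and 1st order accuracy.
```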

@drabach
Contributor Author

drabach commented Feb 12, 2017

When you update the documentation, maybe a reference for the definitions of the terms "first order" and "second order" difference would clarify things. When I use "my" definition of a second order difference I get different results from the gradient function, namely:
For x = np.array([1, 2, 4, 7, 11, 16], dtype=np.float) I get [1, 1, 1, 1, 1, 1]. Formula used: (f(x+h) - 2f(x) + f(x-h))/h^2 with h = 1.
Wikipedia defines it the same way, but maybe that is not appropriate here either.
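The formula quoted above can be evaluated on the same example vector. As a sketch (interior points only, since the 3-point formula needs a neighbor on each side), it is exactly the repeated-difference stencil that np.diff produces:

```python
import numpy as np

x = np.array([1., 2., 4., 7., 11., 16.])
h = 1.0

# (f(x+h) - 2 f(x) + f(x-h)) / h^2, applied at the interior points
second = (x[2:] - 2 * x[1:-1] + x[:-2]) / h**2
print(second)  # [1. 1. 1. 1.]

# Equivalent: differencing twice
assert np.allclose(second, np.diff(x, n=2) / h**2)
```

The constant result 1 reflects that the example data lie on a parabola, so this stencil is recovering its (constant) second derivative rather than the gradient.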

@apbard
Contributor

apbard commented Feb 12, 2017

OK, now I see what you are saying, and we are both right; we are just using the term "order" for different things.
Let's put it this way: the general problem we are trying to solve is to "approximate the n-th (order) derivative of a function f using a finite difference scheme with k-th order accuracy".
Now, np.gradient computes the 1st derivative using a 2nd order central finite difference scheme. At the boundaries it can use either a 2nd order forward/backward scheme or a 1st order one.
The formula you posted is the 2nd order central finite difference approximation of the 2nd derivative of f.
In other words, you are using the term "order" to refer to the n-th derivative, while np.gradient uses it to refer to the accuracy of the scheme; the term "gradient" itself implies the 1st derivative.

(IMHO the Wikipedia page is not very clear on this, since it mixes the two things.)
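The choice between a 2nd order and a 1st order boundary scheme mentioned above corresponds to np.gradient's edge_order keyword (available in NumPy 1.9.1 and later). A quick sketch on the same example vector shows the difference; since the data lie on a parabola, the 2nd order boundary scheme recovers the exact derivative at the endpoints:

```python
import numpy as np

x = np.array([1., 2., 4., 7., 11., 16.])

# edge_order=1 (the default): 1st order one-sided differences at the boundaries
print(np.gradient(x, edge_order=1))  # [1.  1.5 2.5 3.5 4.5 5. ]

# edge_order=2: 2nd order one-sided scheme at the boundaries,
# e.g. (-3*x[0] + 4*x[1] - x[2]) / 2 at the left end
print(np.gradient(x, edge_order=2))  # [0.5 1.5 2.5 3.5 4.5 5.5]
```

Only the endpoint values change; the interior is the same 2nd order central difference in both cases.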
