ENH: added vector operators: divergence, curl and laplacian #6727
Changes from all commits: 517a053, 57d1147, dfae939, e673560, 8ac65a7, 87c19b8, e71dbcd, eeada1f
@@ -41,7 +41,8 @@
     'histogram', 'histogramdd', 'bincount', 'digitize', 'cov', 'corrcoef',
     'msort', 'median', 'sinc', 'hamming', 'hanning', 'bartlett',
     'blackman', 'kaiser', 'trapz', 'i0', 'add_newdoc', 'add_docstring',
-    'meshgrid', 'delete', 'insert', 'append', 'interp', 'add_newdoc_ufunc'
+    'meshgrid', 'delete', 'insert', 'append', 'interp', 'add_newdoc_ufunc',
+    '_inplace_sum', 'divergence', 'curl', 'laplace'
     ]
@@ -1320,6 +1321,214 @@ def gradient(f, *varargs, **kwargs):
    else:
        return outvals


def _inplace_sum(items):
    """
    Return the sum of the elements of an iterable.
    Used in the following functions: divergence,

    .. versionadded:: 1.11.0

    """
    it = iter(items)
    total = next(it)
    for item in it:
        total += item
    return total
[review comment] I think this can be written […]
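As a hedged sketch, the helper above can be exercised on its own to show the in-place accumulation it is meant to provide: the returned object is the first array of the list, mutated in place (the array values below are illustrative only):

```python
import numpy as np

def _inplace_sum(items):
    # Same logic as the PR's helper: start from the first element and
    # accumulate the remaining elements into it with +=.
    it = iter(items)
    total = next(it)
    for item in it:
        total += item
    return total

arrs = [np.ones(4) for _ in range(3)]
out = _inplace_sum(arrs)
print(out)             # [3. 3. 3. 3.]
print(out is arrs[0])  # True: the sum was accumulated into the first array
```

Because `ndarray.__iadd__` operates in place, no intermediate result arrays are allocated; the trade-off is that the first input is overwritten.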
def divergence(v, *varargs):
    """
    Return the divergence of an N-dimensional vector field of N
    components, each of dimension N.

    The divergence is computed using second order accurate central
    differences in the interior and either first differences or second
    order accurate one-sided (forward or backward) differences at the
    boundaries. The returned divergence hence has the same shape as the
    input arrays.
[review comment] Should have […]
    Parameters
    ----------
    v : list of numpy arrays
        Each of these arrays is the N-dimensional projection of the
        vector field on the corresponding axis.

[review comment] Wouldn't it make more sense to make this an array-like? Here, one can choose which dimension to be that of the axes; I think the first one is fine. In that case, one could just do […] and it would even cover the case of having a list of numpy arrays. Another advantage would be that the tests for the sizes to be the same could be omitted, since that would be done implicitly already.

[author reply] Hi, I'm going to fix this and will implement your suggestion as well.
    varargs : scalar or list of scalars, optional
        N scalars specifying the sample distances for each dimension,
        i.e. `dx`, `dy`, `dz`, ... Default distance: 1.
        A single scalar specifies the sample distance for all dimensions.

        .. versionadded:: 1.11.0
    Returns
    -------
    divergence : numpy array
        The output corresponds to the divergence of the input vector field.
        This means that the output array has the form
        dAx/dx + dAy/dy + dAz/dz + ... for an input vector field
        A = (Ax, Ay, Az, ...) up to N dimensions.

    Examples
    --------
    This example shows that the calculated divergence of the 2-D field
    (0.5*x**2, -y*x), whose divergence is 0 everywhere, is a 0 array:

    >>> import numpy as np
    >>> X, Y = np.mgrid[0:2000, 0:2000]
    >>> a1 = 0.5*X**2
    >>> a2 = -Y*X
    >>> c = [a1, a2]
    >>> d = np.divergence(c)
    >>> print(d)
    [[ 0.  0.  0. ...,  0.  0.  0.]
     [ 0.  0.  0. ...,  0.  0.  0.]
     [ 0.  0.  0. ...,  0.  0.  0.]
     ...,
     [ 0.  0.  0. ...,  0.  0.  0.]
     [ 0.  0.  0. ...,  0.  0.  0.]
     [ 0.  0.  0. ...,  0.  0.  0.]]

    """
    N = len(v)
    axes = tuple(range(N))

[review comment] Is this a mistake? Given this, why is there a need to normalize the axes below?

[author reply] Yes, you are right, I borrowed some code from np.gradient and forgot to remove the normalization below. I will do it now, thanks.

[review comment] This seems overly long as well. Doesn't the below cover it?
[review comment] P.S. Is there any reason not just to have […]

[review comment] Yes, I was thinking along similar lines. A trick you could use to avoid repeating the first line outside the loop is to make a dummy object to add in-place:

    class ZeroArray(object):
        def __iadd__(self, other):
            return other

The first time you use […] Then you could write:

    outvals = ZeroArray()
    for axis in range(len(v)):
        outvals += ...

[review comment] Neat. Though I fear it fails the Zen of Python... Actually, I think python's […] This definitely beats everything else in readability, but, doing a quick test, is quite a bit slower than either of our approaches (which both are faster than first making a zero-filled array for large sizes).

[review comment] Indeed. The problem with […] Maybe defining

    def inplace_sum(items, start=0):
        total = start
        for item in items:
            total += item
        return total

[review comment] Actually, in this case we want the result to be added in-place on the first object to avoid an unnecessary memory allocation. So really, we need something like this:

    def inplace_sum(items):
        it = iter(items)
        total = next(it)
        for item in it:
            total += item
        return total

[review comment] Yes, that makes sense -- I can see how for a generic […]
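The `ZeroArray` trick proposed in the thread can be checked in isolation. This is a small sketch (the array values are illustrative): the first `+=` simply returns the other operand, replacing the dummy, after which ordinary in-place ndarray addition takes over:

```python
import numpy as np

class ZeroArray(object):
    # Dummy start value: __iadd__ returns the other operand itself, so
    # the first += binds the first array and no zero-filled array is
    # ever allocated.
    def __iadd__(self, other):
        return other

total = ZeroArray()
for a in (np.ones(3), np.ones(3), np.ones(3)):
    total += a  # first pass binds the array; later passes add in place
print(total)  # [3. 3. 3.]
```

As the thread notes, this keeps the loop body uniform at the cost of a somewhat surprising mutable-start idiom.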
    out = _inplace_sum(gradient(v[ax], *varargs, axis=ax, edge_order=0)
                       for ax in axes)
    return out
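A self-contained sketch of the same divergence computation can be written against the released `np.gradient` API. One hedge: the public `np.gradient` only accepts `edge_order` values of 1 or 2, so the PR's `edge_order=0` (from its modified gradient) is replaced here by `edge_order=2`, which is exact at the boundary for this quadratic test field:

```python
import numpy as np

def divergence(v, *varargs):
    # Sum of dA_i/dx_i: one np.gradient call per component, each
    # restricted to its own axis. edge_order=2 is an assumption standing
    # in for the PR's edge_order=0.
    return sum(np.gradient(v[ax], *varargs, axis=ax, edge_order=2)
               for ax in range(len(v)))

X, Y = np.mgrid[0:50, 0:50]
d = divergence([0.5 * X**2, -Y * X])  # analytic divergence: x + (-x) = 0
print(np.allclose(d, 0))  # True
```

This reproduces the zero array of the docstring example on a smaller grid.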
def curl(v, *varargs):
    """
    Return the curl of a 3-D vector field.

    The curl is computed using second order accurate central differences
    in the interior and either first differences or second order accurate
    one-sided (forward or backward) differences at the boundaries. The
    returned curl components hence have the same shape as the input
    arrays.
    Parameters
    ----------
    v : list of numpy arrays
        Each of these arrays is the 3-dimensional projection of the
        vector field on the corresponding axis.

    varargs : scalar or list of scalars, optional
        N scalars specifying the sample distances for each dimension,
        i.e. `dx`, `dy`, `dz`, ... Default distance: 1.
        A single scalar specifies the sample distance for all dimensions.

        .. versionadded:: 1.11.0
    Returns
    -------
    curl : list of numpy arrays
        The output corresponds to the curl of the input 3-D vector field.
        This means that the output has the form
        (dAz/dy-dAy/dz, dAx/dz-dAz/dx, dAy/dx-dAx/dy) for an input vector
        field A = (Ax, Ay, Az).
    Examples
    --------
    The following example shows in matplotlib that the curl of the
    vector field (x*y, 0, 0) is nonzero only in the third component
    of the resulting vector field:

    >>> import matplotlib.pylab as plt
    >>> import numpy as np
    >>> X, Y, Z = np.mgrid[0:200, 0:200, 0:200]
    >>> a0 = X*Y
    >>> a1 = np.zeros(np.shape(X))
    >>> a2 = np.zeros(np.shape(X))
    >>> a = [a0, a1, a2]
    >>> cx, cy, cz = np.curl(a)
    >>> plt.imshow(cx[:, :, 1])
    >>> plt.colorbar()
    >>> plt.show()
    >>> plt.imshow(cy[:, :, 1])
    >>> plt.colorbar()
    >>> plt.show()
    >>> plt.imshow(cz[:, :, 1])
    >>> plt.colorbar()
    >>> plt.show()

    """
    if len(v) != 3:
        raise ValueError("Enter a list of 3 arrays")

    outvals = [gradient(v[(ax+2) % 3], *varargs, axis=(ax+1) % 3,
                        edge_order=0) -
               gradient(v[(ax+1) % 3], *varargs, axis=(ax+2) % 3,
                        edge_order=0)
               for ax in range(3)]
    return outvals
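The cyclic-index formula above can be checked numerically against the analytic curl of the docstring's test field. As before, `edge_order=2` is assumed in place of the PR's `edge_order=0`, and a small grid keeps the check cheap; for A = (x*y, 0, 0) the curl is (0, 0, -x):

```python
import numpy as np

def curl(v, *varargs):
    # Sketch of the PR's curl, built on the released np.gradient
    # (edge_order=2 assumed in place of the PR's edge_order=0).
    return [np.gradient(v[(ax + 2) % 3], *varargs, axis=(ax + 1) % 3,
                        edge_order=2) -
            np.gradient(v[(ax + 1) % 3], *varargs, axis=(ax + 2) % 3,
                        edge_order=2)
            for ax in range(3)]

X, Y, Z = np.mgrid[0:20, 0:20, 0:20]
cx, cy, cz = curl([X * Y, np.zeros(X.shape), np.zeros(X.shape)])
# Analytically curl(x*y, 0, 0) = (0, 0, -x); the field is linear along
# each axis, so the finite differences are exact.
print(np.allclose(cx, 0), np.allclose(cy, 0), np.allclose(cz, -X))
```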
def laplace(f, *varargs):
    """
    Return the Laplacian of the input.

    Computes the Laplacian of an N-dimensional numpy array (scalar
    field), or of a list of N N-dimensional numpy arrays (vector
    field), as the composition of the numpy functions gradient and
    divergence. In the case of an input vector field, the result
    is the vector Laplacian, for which a list of the Laplacian of
    each component is returned.
    For more information on the computations, type help(gradient) or
    help(divergence).
    Parameters
    ----------
    f : array_like
        Input array.

        .. versionadded:: 1.11.0
    Returns
    -------
    l : array_like
        Laplacian of the input. If the input is a scalar field f
        (single numpy array) the output is (d^2(f)/dx_0^2 +
        d^2(f)/dx_1^2 + d^2(f)/dx_2^2 + ...), where d^2(f)/dx_i^2
        refers to the second derivative of f with respect to x_i.
        If the input is a vector field the output is a list of
        numpy arrays, each of them being the Laplacian operator
        applied to each of the input vector field components f_i.

    See Also
    --------
    gradient, divergence
    Examples
    --------
    This example returns an array that grows linearly in the X axis
    after applying the laplace function to the array X**3+Y**2+Z**2:

    >>> import matplotlib.pylab as plt
    >>> import numpy as np
    >>> X, Y, Z = np.mgrid[0:200, 0:200, 0:200]
    >>> f = X**3 + Y**2 + Z**2
    >>> d = np.laplace(f)
    >>> plt.imshow(d[:, :, 1])
    >>> plt.colorbar()
    >>> plt.show()

    """
    if isinstance(f, np.ndarray):
        l = divergence(gradient(f, *varargs, edge_order=0), *varargs)
    elif isinstance(f, list):
        if not all(np.shape(f[0]) == np.shape(f[i]) for i in range(len(f))):
            raise TypeError("All components of the input vector field "
                            "must be the same shape")
        l = [divergence(gradient(f[i], *varargs, edge_order=0), *varargs)
             for i in range(len(f))]
    else:
        raise TypeError("Please enter a numpy array or a list of "
                        "numpy arrays of the same shape")
    return l
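The gradient-then-divergence composition for a scalar field can be sketched with the released `np.gradient` alone (unit spacing assumed, and `edge_order=2` standing in for the PR's `edge_order=0`). For f = x^3 + y^2 + z^2 the analytic Laplacian is 6x + 4, which the sketch reproduces away from the x boundary, where the one-sided estimate of the cubic term is not exact:

```python
import numpy as np

def laplace(f):
    # Scalar-field sketch: divergence of the gradient, matching the PR's
    # composition. Unit spacing and edge_order=2 are assumptions.
    grads = np.gradient(f, edge_order=2)
    if f.ndim == 1:
        grads = [grads]  # np.gradient returns a bare array in 1-D
    return sum(np.gradient(grads[ax], axis=ax, edge_order=2)
               for ax in range(f.ndim))

X, Y, Z = np.mgrid[0:20, 0:20, 0:20]
d = laplace(X**3 + Y**2 + Z**2)
# Compare to the analytic Laplacian 6x + 4 away from the x boundary,
# where the second-order differences of the cubic term stay exact.
print(np.allclose(d[2:-2], (6 * X + 4)[2:-2]))  # True
```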
def diff(a, n=1, axis=-1):
    """

[review comment] Blank line before """