ENH: added vector operators: divergence, curl and laplacian #6727
@@ -1320,6 +1320,258 @@ def gradient(f, *varargs, **kwargs):
    else:
        return outvals

def divergence(v, *varargs, **kwargs):
    """
    Return the divergence of an N-dimensional vector field of N
    components, each of dimension N.

    The divergence is computed using second order accurate central
    differences in the interior and either first differences or second
    order accurate one-sided (forward or backwards) differences at the
    boundaries. The returned divergence hence has the same shape as
    each component of the input vector field.

Review comment: Should have […]

    Parameters
    ----------
    v : list of numpy arrays

Review comment: Wouldn't it make more sense to make this array-like? Here, one can choose which dimension to be that of the axes; I think the first one is fine. In that case, one could just do […] and it would even cover the case of having a list of numpy arrays. Another advantage would be that the tests for the sizes being the same could be omitted, since that would be done implicitly already.

Reply: Hi, I'm going to fix this and will implement your suggestion as well.
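
The snippet the reviewer pasted here was lost in this rendering; a minimal sketch of the kind of conversion presumably meant (a reconstruction, not the reviewer's exact code) is:

```python
import numpy as np

# Reconstruction of the suggestion: treat the whole input as one array
# whose first axis indexes the vector components. A list of equally
# shaped N-d arrays is stacked automatically, and mismatched component
# shapes raise here (in current NumPy), making the explicit shape
# checks below redundant.
v = np.asanyarray(v)
```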

        Each of these arrays is the N-dimensional projection of the
        vector field on the corresponding axis.

    varargs : scalar or list of scalars, optional
        N scalars specifying the sample distances for each dimension,
        i.e. `dx`, `dy`, `dz`, ... Default distance: 1.
        A single scalar specifies the sample distance for all dimensions.
        If `axis` is given, the number of varargs must equal the number
        of axes.
    edge_order : {1, 2}, optional

Review comment: This seems to not actually be implemented / make a difference currently. Which also points to an unfortunate huge chunk of work, but it can wait for a while, it just needs patience. And that is, we need a lot of tests after hashing out what we want.

        Gradient is calculated using N\ :sup:`th` order accurate differences
        at the boundaries. Default: 1.

    Returns
    -------
    divergence : numpy array
        The output corresponds to the divergence of the input vector field.
        This means that the output array has the form
        dAx/dx + dAy/dy + dAz/dz + ... for an input vector field
        A = (Ax, Ay, Az, ...) up to N dimensions.

    Examples
    --------
    This example shows that the computed divergence of the 2-D vector
    field (0.5*x**2, -y*x), whose analytic divergence is 0 everywhere,
    is an all-zero array:

    >>> import numpy as np
    >>> X, Y = np.mgrid[0:2000, 0:2000]
    >>> a1 = 0.5 * X**2
    >>> a2 = -Y * X
    >>> c = [a1, a2]
    >>> d = np.divergence(c)
    >>> print(d)
    [[ 0.  0.  0. ...,  0.  0.  0.]
     [ 0.  0.  0. ...,  0.  0.  0.]
     [ 0.  0.  0. ...,  0.  0.  0.]
     ...,
     [ 0.  0.  0. ...,  0.  0.  0.]
     [ 0.  0.  0. ...,  0.  0.  0.]
     [ 0.  0.  0. ...,  0.  0.  0.]]
    """
    N = [0] * len(v)

Review comment: My sense is that these pre-calculation shape checks are not necessary: you can just rely on the addition in outvals failing if the shapes do not match. But if you really feel it is necessary, all that would be required is […]

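The one-line check the reviewer had in mind was likewise lost; a plausible stand-in (hypothetical, not the reviewer's exact code) would be:

```python
# Hypothetical stand-in for the lost one-liner: compare every
# component's shape against the first and fail early on a mismatch.
if any(np.shape(v_i) != np.shape(v[0]) for v_i in v):
    raise ValueError("All components of the input vector field "
                     "must have the same shape")
```
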
    for i, v_c in enumerate(v):
        v_c = np.asanyarray(v_c)
        N[i] = len(v_c.shape)  # Get the number of dimensions of each component
    if False in [N[0] == N[i] for i in range(len(v))]:

Review comment: use […]

        # Not all components have the same number of dimensions
        raise ValueError("Not all components of input"
                         " have the same number of dimensions")
    else:
        if False in [np.shape(v[0]) == np.shape(v[i]) for i in range(len(v))]:
            raise ValueError("Not all components of input are the same size")
        else:
            N = N[0]
            # If all vector field components are the same shape,
            # N becomes the number of dimensions

    axes = tuple(range(N))

Review comment: Is this a mistake? Given this, why is there a need to normalize the axes below?

Reply: Yes, you are right, I borrowed some code from np.gradient and forgot to remove the normalization below. I will do it now, thanks.

Review comment: This seems overly long as well. Doesn't the below cover it? […]

Review comment: p.s. Is there any reason not just to have […]?

Review comment: Yes, I was thinking along similar lines. A trick you could use to avoid repeating the first line outside the loop is to make a dummy object to add in-place:

```python
class ZeroArray(object):
    def __iadd__(self, other):
        return other
```

The first time you use `+=`, the `ZeroArray` instance replaces itself with the other operand. Then you could write:

```python
outvals = ZeroArray()
for axis in range(len(v)):
    outvals += ...
```

Review comment: Neat. Though I fear it fails the Zen of Python... Actually, I think Python's […] This definitely beats everything else in readability, but, doing a quick test, it is quite a bit slower than either of our approaches (which are both faster than first making a zero-filled array for large sizes).

Review comment: Indeed. The problem with […]. Maybe defining

```python
def inplace_sum(items, start=0):
    total = start
    for item in items:
        total += item
    return total
```

Review comment: Actually, in this case we want the result to be added in-place on the first object, to avoid an unnecessary memory allocation. So really, we need something like this:

```python
def inplace_sum(items):
    it = iter(items)
    total = next(it)
    for item in it:
        total += item
    return total
```

Review comment: Yes, that makes sense -- I can see how for a generic […]
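
For concreteness, a hedged sketch of how the inplace_sum helper above could be applied inside divergence (assuming unit sample spacing):

```python
# Accumulate the per-axis derivatives without allocating a zero-filled
# output first; the buffer of the first gradient result is reused.
outvals = inplace_sum(np.gradient(v[ax], axis=ax)
                      for ax in range(len(v)))
```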

    outvals = np.zeros(np.shape(v_c))  # Initialize the output array

    for ax in axes:
        outvals += np.gradient(v[ax], *varargs, axis=ax,
                               edge_order=kwargs.get('edge_order', 1))

    return outvals

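Pulling the review suggestions together, a hedged sketch of what a simplified divergence could look like (divergence_sketch is a hypothetical name, and none of this PR's helpers are part of released NumPy):

```python
import numpy as np

def divergence_sketch(v, dx=1.0):
    # Assumptions for this sketch: v is array-like with the first axis
    # indexing the components, and dx is a single scalar spacing.
    v = np.asanyarray(v)
    grads = (np.gradient(v[i], dx, axis=i, edge_order=2)
             for i in range(len(v)))
    total = next(grads)          # reuse the first gradient's buffer
    for g in grads:
        total += g               # accumulate dA_i/dx_i in place
    return total
```

On the docstring's example field (0.5*X**2, -Y*X) this returns an all-zero array, since second-order differences are exact for quadratics.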


def curl(v, *varargs, **kwargs):
    """
    Return the curl of a 3-D vector field.

    The curl is computed using second order accurate central differences
    in the interior and either first differences or second order accurate
    one-sided (forward or backwards) differences at the boundaries. The
    returned curl components hence have the same shape as the input
    arrays.

    Parameters
    ----------
    v : list of numpy arrays
        Each of these arrays is the N-dimensional projection
        of the vector field on the corresponding axis.

    varargs : scalar or list of scalars, optional
        N scalars specifying the sample distances for each dimension,
        i.e. `dx`, `dy`, `dz`, ... Default distance: 1.
        A single scalar specifies the sample distance for all dimensions.
        If `axis` is given, the number of varargs must equal the number
        of axes.
    edge_order : {1, 2}, optional
        Gradient is calculated using N\ :sup:`th` order accurate differences
        at the boundaries. Default: 1.

    axis : None or int or tuple of ints, optional
        Gradient is calculated only along the given axis or axes.
        The default (axis = None) is to calculate the gradient for
        all the axes of the input array. axis may be negative, in
        which case it counts from the last to the first axis.

    Returns
    -------
    curl : list of numpy arrays
        The output corresponds to the curl of the input 3-D vector field.
        This means that the output has the form
        (dAz/dy - dAy/dz, dAx/dz - dAz/dx, dAy/dx - dAx/dy) for an
        input vector field A = (Ax, Ay, Az).

    Examples
    --------
    The following example shows, using matplotlib, that applying curl to
    the vector field (x*y, 0, 0) gives the expected result: a vector
    field that is nonzero only in its third component.

    >>> import matplotlib.pylab as plt
    >>> import numpy as np
    >>> X, Y, Z = np.mgrid[0:200, 0:200, 0:200]
    >>> a0 = X * Y
    >>> a1 = np.zeros(np.shape(X))
    >>> a2 = np.zeros(np.shape(X))
    >>> a = [a0, a1, a2]
    >>> [cx, cy, cz] = np.curl(a)
    >>> plt.imshow(cx[:, :, 1])
    >>> plt.colorbar()
    >>> plt.show()
    >>> plt.imshow(cy[:, :, 1])
    >>> plt.colorbar()
    >>> plt.show()
    >>> plt.imshow(cz[:, :, 1])
    >>> plt.colorbar()
    >>> plt.show()
    """
    N = [0] * len(v)

Review comment: See comment on divergence above.

Review comment: p.s. Here of course you do need to check that `len(v) == 3`.

Reply: Do you mean len(v) == 3 in divergence? In principle I was intending to create a "generalized" divergence, for any number of dimensions, following the example of np.gradient, that can be applied to N-dimensional arrays, and the "return vector" would have N components. Please, correct me if I am wrong. However, for the case of curl, three dimensions are needed, not more nor fewer.

Review comment: No, for divergence I felt you either should omit all sanity checks, since if the shapes did not match the calculation of the divergence would fail anyway, or just do the very simple one I gave, since that covers what you had anyway.

    for i, v_c in enumerate(v):
        v_c = np.asanyarray(v_c)
        N[i] = len(v_c.shape)
        # Extract the number of dimensions from every component v_c
    if False in [N[i] == 3 for i in range(len(v))]:
        # Not all components are three-dimensional
        raise ValueError("Not all components of input vector field"
                         " have three dimensions")
    else:
        if False in [np.shape(v[0]) == np.shape(v[i]) for i in range(len(v))]:
            raise ValueError("Not all components of input are the same size")
        else:
            N = N[0]
            # If all vector field components are the same shape,
            # N becomes the number of dimensions

    outvals = [np.zeros(np.shape(v_c))] * 3  # Initialize the output vector field

Review comment: This can also be shortened substantially […] (Note that I don't call […])

Reply: Hello everyone, […]

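The reviewer's shortened version did not survive this rendering; a hypothetical reconstruction of a compact curl built directly on np.gradient (curl_sketch and d are made-up names) might read:

```python
import numpy as np

def curl_sketch(v, dx=1.0):
    # Assumes v is array-like with shape (3, nx, ny, nz): the three
    # components of a 3-D vector field A = (Ax, Ay, Az).
    v = np.asanyarray(v)

    def d(i, j):
        # Derivative of component i along axis j.
        return np.gradient(v[i], dx, axis=j)

    return [d(2, 1) - d(1, 2),   # dAz/dy - dAy/dz
            d(0, 2) - d(2, 0),   # dAx/dz - dAz/dx
            d(1, 0) - d(0, 1)]   # dAy/dx - dAx/dy
```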

    lst_axes = [1, 2, 2, 0, 0, 1]  # ordered list of axes along which to differentiate
    lst_args = [2, 1, 0, 2, 1, 0]  # ordered list of components to differentiate

    for i in range(6):
        # Select the appropriate component and axis for each of the six
        # required derivative computations
        ax = lst_axes[i]
        arg = lst_args[i]

        out = np.gradient(v[arg], *varargs, axis=ax,
                          edge_order=kwargs.get('edge_order', 1))

        if i == 0:
            out0 = out
        elif i == 1:
            out1 = out
        elif i == 2:
            out2 = out
        elif i == 3:
            out3 = out
        elif i == 4:
            out4 = out
        else:
            out5 = out

    outvals[0] = out0 - out1  # dAz/dy - dAy/dz
    outvals[1] = out2 - out3  # dAx/dz - dAz/dx
    outvals[2] = out4 - out5  # dAy/dx - dAx/dy
    return outvals


def laplace(f, *varargs):
    """
    Return the Laplacian of the input.

    Computes the Laplacian of an N-dimensional numpy array (scalar
    field), or of a list of N N-dimensional numpy arrays (vector
    field), as the composition of the numpy functions gradient and
    divergence. In the case of an input vector field, the result
    is the vector Laplacian, i.e. a list containing the Laplacian of
    each component is returned.
    For more information on the computations, type help(gradient) or
    help(divergence).

    Parameters
    ----------
    f : array_like
        Input array.

    Returns
    -------
    l : array_like
        Laplacian of the input. If the input is a scalar field f
        (a single numpy array), the output is (d^2(f)/dx_0^2 +
        d^2(f)/dx_1^2 + d^2(f)/dx_2^2 + ...), where d^2(f)/dx_i^2
        refers to the second derivative of f with respect to x_i.
        If the input is a vector field, the output is a list of
        numpy arrays, each of them being the Laplacian operator
        applied to the corresponding component f_i of the input
        vector field.

    See Also
    --------
    gradient, divergence

    Examples
    --------
    This example returns an array that grows linearly along the X axis
    after applying the laplace function to the array X**3 + Y**2 + Z**2:

    >>> X, Y, Z = np.mgrid[0:200, 0:200, 0:200]
    >>> f = X**3 + Y**2 + Z**2
    >>> d = laplace(f)
    >>> plt.imshow(d[:, :, 1])
    >>> plt.colorbar()
    >>> plt.show()
    """
    if isinstance(f, np.ndarray):
        l = np.divergence(np.gradient(f, *varargs))
    elif isinstance(f, list):
        if False in [np.shape(f[0]) == np.shape(f[i]) for i in range(len(f))]:
            raise TypeError("All components of the input vector field "
                            "must be the same shape")
        else:
            l = []
            for i in range(len(f)):
                laplace_comp = np.divergence(np.gradient(f[i], *varargs))
                l.append(laplace_comp)
    else:
        raise TypeError("Please enter a numpy array or a list of"
                        " numpy arrays of the same shape")
    return l

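As a sanity check on the documented behaviour, here is a small self-contained test of the Laplacian-as-divergence-of-gradient construction, written against plain np.gradient since the helpers proposed in this PR are not part of released NumPy:

```python
import numpy as np

# For f(x, y) = x**2 + y**2 the analytic Laplacian is 4 everywhere.
X, Y = np.mgrid[0:50, 0:50].astype(float)
f = X**2 + Y**2
gx, gy = np.gradient(f)                          # first derivatives
lap = np.gradient(gx, axis=0) + np.gradient(gy, axis=1)
# Away from boundary contamination the discrete result matches:
assert np.allclose(lap[2:-2, 2:-2], 4.0)
```
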

def diff(a, n=1, axis=-1):
    """

Review comment: Some more general comments -- did not make up my mind about everything, so some of it is more for discussion: rather than reimplementing the differencing, these functions could simply call gradient as needed; and perhaps an axis=None-style argument could be used to pick which derivatives to calculate or some such (that would also in some sense allow the stacked-matrix kind of logic).

Reply: Hello seberg,
You are right, it makes more sense to call gradient as needed. I thought of this, but for some reason I also thought that each function should be "stand-alone" (which I didn't do in laplace!). It will certainly be easier to understand (for me too!) this way. I will work on keeping what is new, and have the rest substituted with the appropriate calls to gradient. Yes, all the varargs are uncomfortable to me too; in this way (calling gradient), this "freedom" will be removed when calling divergence, curl and laplace, since those options do not make much sense for these functions.
Thanks for your suggestions.
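
To make the "call gradient as needed" direction concrete, a hedged sketch of a Laplacian built only on np.gradient (laplace_via_gradient is a hypothetical name, not code from this PR):

```python
import numpy as np

def laplace_via_gradient(f, dx=1.0):
    # Sum of the unmixed second derivatives, each obtained by applying
    # np.gradient twice along the same axis.
    f = np.asanyarray(f)
    total = None
    for axis in range(f.ndim):
        second = np.gradient(np.gradient(f, dx, axis=axis), dx, axis=axis)
        if total is None:
            total = second       # first term seeds the accumulator
        else:
            total += second      # remaining terms added in place
    return total
```

For f = X**3 + Y**2 + Z**2, as in the laplace docstring example, this yields approximately 6x + 4 away from the boundaries.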