
Edge treatment in map_coordinates #4075

Open
astrofrog opened this issue Oct 15, 2014 · 5 comments
Labels
enhancement (A new feature or improvement) · scipy.ndimage

Comments

@astrofrog
Contributor

Consider the following example:

In [1]: import numpy as np

In [2]: from scipy.ndimage import map_coordinates

In [3]: image = np.zeros((4,4))

In [4]: image[1:3,1:3] = 1

This is an array which looks like:

0 0 0 0
0 1 1 0
0 1 1 0
0 0 0 0

Now we interpolate between the four top left pixels:

In [5]: map_coordinates(image, np.array([[0.5],[0.5]]))
Out[5]: array([ 0.140625])

The result makes sense: the value at (0.5, 0.5) is interpolated from the top-left corner of the array, whose nearest values are:

0 0
0 1
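A side note on the numbers above: map_coordinates interpolates with a cubic spline by default (order=3), which is where 0.140625 comes from; order=1 gives true bilinear interpolation. A quick sketch comparing the two at this point:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# 4x4 image with a 2x2 block of ones in the centre, as above.
image = np.zeros((4, 4))
image[1:3, 1:3] = 1

point = np.array([[0.5], [0.5]])

# Default is a cubic spline (order=3): gives 0.140625, as shown above.
cubic = map_coordinates(image, point)

# order=1 is true bilinear interpolation: the average of 0, 0, 0, 1 -> 0.25.
bilinear = map_coordinates(image, point, order=1)

print(cubic[0], bilinear[0])
```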

Now let's take an array that is all ones:

In [6]: image = np.ones((4,4))

and now we interpolate just outside the top-left pixel:

In [7]: map_coordinates(image, np.array([[-0.5],[-0.5]]), mode='constant', cval=0)
Out[7]: array([ 0.])

At the moment this sets the result to 0, since the point is outside the boundary. Would it be possible to implement a new mode that returns the result we would have gotten if the data had an extra row and column of zeros just outside the array, i.e. 0.14 as in the example shown above? This mode would be almost identical to the constant mode, but would give a different result when the point lies at most one row or one column away from the image (further out than that it would just return cval anyway).

@dagap

dagap commented Nov 10, 2014

This would be really good to have, especially when working with underlying fields that are supposed to extend beyond the measurement domain.

@omarocegueda

That would be great! It would make interpolation along the edges more stable (i.e. small numerical differences in the interpolation coordinates would produce small numerical differences in the interpolation result). Right now, if for some reason we attempt to interpolate at, say, [-epsilon, -epsilon], the result is 0, while interpolating at [0, 0] gives A[0, 0], which may be quite large. The epsilon above could be the result of a computation made with extended precision (80-bit floating-point variables), while the 0 could be the result of the exact same computation made with double precision (64-bit floating-point variables). Running exactly the same code on two different architectures can therefore produce very large differences in the output. I think this proposal would reduce this effect (which we are actually experiencing right now).

To make it more general, the new mode could accept a constant parameter (just like the constant mode) and assume extra rows/columns having that constant value instead of always zero. That way, beyond the first extra row/column, the result would be the same as the constant mode, but more stable.
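The discontinuity described above is easy to reproduce (a sketch; order=1 is used only to keep the expected numbers simple):

```python
import numpy as np
from scipy.ndimage import map_coordinates

image = np.full((4, 4), 10.0)
eps = 1e-9

# Exactly on the corner sample: returns the sample value.
at_corner = map_coordinates(image, np.array([[0.0], [0.0]]),
                            mode='constant', cval=0, order=1)

# An infinitesimal step outside the extent: the result snaps to cval.
just_outside = map_coordinates(image, np.array([[-eps], [-eps]]),
                               mode='constant', cval=0, order=1)

print(at_corner[0], just_outside[0])
```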

@astrofrog
Contributor Author

@omarocegueda - thanks for your comments! Is there a reason why we couldn't just use cval itself for that, and have another flag that switches between the current implementation and the behavior I am suggesting above?

@omarocegueda

Oh, of course! Sorry, for some reason I missed the last part of your comment. Yes, I think what you suggest would be great. =)

@astrofrog
Contributor Author

So having looked into it a bit, there's the easy way and the hard way ;) The easy way is to simply do something like:

def map_coordinates_new(image, coords, mode='constant', cval=0.0, **kwargs):
    # Pad with one ring of cval, then shift the coordinates to match the pad.
    image = np.pad(image, 1, mode='constant', constant_values=cval)
    return map_coordinates(image, coords + 1, mode=mode, cval=cval, **kwargs)

at the Python level (the above should work). But of course this makes a copy of the data, so it is not good for large arrays. The proper way to implement this (the hard way) is to dig into the C code in scipy.ndimage and implement the mode there. For my purposes a simple fix like the function above is fine, but I personally think it would be worthwhile if someone wanted to implement this kind of mode properly.
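For what it's worth, newer SciPy releases (1.6.0 and later) added a 'grid-constant' mode to the ndimage interpolation functions that appears to implement exactly this padded-with-cval behavior natively. A sketch, assuming SciPy >= 1.6 (order=1 keeps the expected value easy to verify by hand):

```python
import numpy as np
from scipy.ndimage import map_coordinates

image = np.ones((4, 4))
point = np.array([[-0.5], [-0.5]])

# Legacy 'constant' mode: any point outside the array extent returns cval.
legacy = map_coordinates(image, point, mode='constant', cval=0, order=1)

# 'grid-constant' (SciPy >= 1.6) interpolates into a cval-padded grid:
# at (-0.5, -0.5), bilinear interpolation mixes 0, 0, 0, 1 -> 0.25.
padded = map_coordinates(image, point, mode='grid-constant', cval=0, order=1)

print(legacy[0], padded[0])
```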
