Edge treatment in map_coordinates #4075
This is something that would be really good to have, especially when working with underlying fields that are supposed to extend beyond the measurement domain.
That would be great! It would make interpolation along the edges more stable (i.e. small numerical differences in the interpolation coordinates produce small numerical differences in the interpolation result). Right now, if for some reason we attempt to interpolate at, say, [-epsilon, -epsilon], the result is 0, but interpolating at [0, 0] gives A[0, 0], which may be quite large. The epsilon above could be the result of a computation made with extended precision (80-bit floating point variables), while the 0 could be the result of the exact same computation made with double precision (64-bit floating point variables). This can produce very large differences when running exactly the same code on two different architectures. I think this proposal would reduce this effect (which we are actually experiencing right now).

To make it more general, the new mode could accept a constant parameter (just like the constant mode) and assume extra rows/columns with that constant value instead of always zero. That way, beyond the first extra row/column, the result would be the same as in constant mode, but more stable.
@omarocegueda - thanks for your comments! Is there a reason why we couldn't just use the existing `cval` parameter for that?
Oh, of course! Sorry, for some reason I missed the last part of your comment. Yes, I think what you suggest would be great. =)
So having looked into it a bit, there's the easy way and the hard way ;) The easy way is to simply do something like:
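The original snippet was lost from this copy of the thread; a minimal sketch of that "easy way" (the helper name `map_coordinates_padded` is hypothetical, not an existing scipy function, and the one-pixel pad only matches linear interpolation, i.e. `order=1`) would be to pad the array with the fill value and shift the coordinates accordingly:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def map_coordinates_padded(data, coords, cval=0.0, **kwargs):
    """Hypothetical helper: interpolate as if `data` had one extra
    border of `cval` around it (adequate for order=1)."""
    # Add one layer of cval on every side, then shift the requested
    # coordinates so they index into the padded array.
    padded = np.pad(data, 1, mode='constant', constant_values=cval)
    shifted = np.asarray(coords, dtype=float) + 1
    return map_coordinates(padded, shifted, cval=cval, **kwargs)
```

For points well inside the array this returns the same values as a plain `map_coordinates` call; it only changes the result within one row/column of the edge.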
at the Python level (the above should work). But of course this involves making a copy of the data, so it is not good for large arrays. The proper way to implement this (which is the hard way) is to dig into the C code in `scipy.ndimage`.
Consider the following example:
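The code blocks from the original post were lost in this copy of the thread; a plausible reconstruction (the 4x4 `arange` array is my assumption, not necessarily the original data) is:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Illustrative data: a 4x4 array of increasing floats.
data = np.arange(16, dtype=float).reshape((4, 4))
```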
This is an array which looks like:
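Assuming the illustrative 4x4 `arange` array (the original display was elided), it would look like:

```python
import numpy as np

data = np.arange(16, dtype=float).reshape((4, 4))
print(data)
# [[ 0.  1.  2.  3.]
#  [ 4.  5.  6.  7.]
#  [ 8.  9. 10. 11.]
#  [12. 13. 14. 15.]]
```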
Now we interpolate between the four top left pixels:
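Continuing with the illustrative array (a reconstruction of the elided snippet), interpolating at (0.5, 0.5) with linear interpolation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

data = np.arange(16, dtype=float).reshape((4, 4))
# Coordinates are given as [rows, cols]; (0.5, 0.5) sits exactly
# between the four top left pixels.
result = map_coordinates(data, [[0.5], [0.5]], order=1)
# result is array([2.5])
```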
The result makes sense because the value at (0.5,0.5) is the bilinear interpolation of an array with:
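With the reconstructed `arange` data assumed here, those four top left values are:

```python
import numpy as np

data = np.arange(16, dtype=float).reshape((4, 4))
corner = data[:2, :2]
# corner is [[0., 1.],
#            [4., 5.]]
# At (0.5, 0.5) each pixel gets weight 0.25, so (0+1+4+5)/4 = 2.5.
```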
Now let's take an array that is all ones:
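A minimal reconstruction of the elided snippet:

```python
import numpy as np

# A 4x4 array containing only ones.
ones = np.ones((4, 4))
```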
and we now interpolate above the top left pixel:
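A sketch of the elided snippet; the coordinate (-0.625, -0.625) is my choice for illustration, picked so that the zero-padded result works out to roughly the 0.14 quoted below:

```python
import numpy as np
from scipy.ndimage import map_coordinates

ones = np.ones((4, 4))

# Current behaviour: the point lies outside the array, so the
# result is simply cval (0 by default).
outside = map_coordinates(ones, [[-0.625], [-0.625]], order=1)
# outside is array([0.])

# What the proposed mode would return: the same lookup on a copy
# padded with one border of zeros, with the coordinates shifted.
padded = np.pad(ones, 1, mode='constant')
proposed = map_coordinates(padded, [[0.375], [0.375]], order=1)
# proposed is array([0.140625]), i.e. roughly 0.14
```

(For reference, later scipy releases added a `grid-constant` boundary mode along these lines.)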
At the moment this sets the result to 0, since the point is outside the boundary, but I was wondering whether it would be possible to implement a new mode that basically returns the result that would have been obtained if the data had had an extra row and column of zeros just outside the ones, so it would give 0.14 as in the example shown above. This mode would be almost identical to the constant mode, but would give a different result when the point lies at most one row or one column away from the image (further than this it would just return `cval` anyway).