alternate fix for issue #997 #1477
Conversation
ref_interpolator = Triangulation([0, 10, 10, 0],
                                 [0, 0, 10, 10]).linear_interpolator([1, 10, 5, 2.0])

def equal_arrays(a1, a2, tolerance=1e-10):
Were you aware of numpy.testing? In particular the numpy.testing.assert_array_almost_equal function.
Nope, but always happy to learn.
help(assert_array_almost_equal) referred me to np.testing.assert_allclose, which in turn referenced np.allclose(). A useful np shortcut I was not aware of. Thanks 👍
(p.s. It seems that the generalizations added by assert_allclose, like handling inf/nan values, are redundant in this case, and probably not worth the extra import and runtime, so I use plain allclose().)
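The np.allclose shortcut mentioned above can replace a hand-rolled equal_arrays helper; a minimal sketch (the array values are illustrative, not from the PR's test suite):

```python
import numpy as np

# Two arrays that agree to within the default tolerances.
a = np.array([1.0, 2.0, 3.0])
b = a + 1e-12

# np.allclose(a, b) is True when |a - b| <= atol + rtol * |b| elementwise
# (defaults: rtol=1e-05, atol=1e-08).
print(np.allclose(a, b))         # True
print(np.allclose(a, a + 1e-3))  # False: the difference exceeds the tolerances

# The testing variant raises an informative AssertionError instead of
# returning False, which gives better failure messages in a test suite.
np.testing.assert_allclose(a, b)
```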
From a naive point of view, I prefer this change over #998. @ianthomas23 & others, would you mind reviewing and commenting on whether this PR is preferable (if so, please close the old PR)? Good work @AmitAronovitch! Cheers,
- dx = (x1 - x0) / (xsteps-1);
- dy = (y1 - y0) / (ysteps-1);
+ dx = ( x1==x0 ? 0 : (x1 - x0) / (xsteps-1) );
+ dy = ( y1==y0 ? 0 : (y1 - y0) / (ysteps-1) );
Perhaps I am missing something here, but if x1==x0, then wouldn't the operation return zero anyway (unless xsteps==1)? If all we are doing is preventing division by zero errors, then wouldn't we rather want to test for xsteps==1?
I agree with @WeatherGod here, xsteps==1 (and similar for y) is better here.
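The suggested guard, sketched in Python for clarity (the actual patch is in the C code; the function name here is hypothetical, not from the codebase):

```python
def grid_spacing(x0, x1, xsteps):
    """Spacing between samples of an xsteps-point grid spanning [x0, x1].

    Guarding on xsteps == 1 (rather than on x1 == x0) catches the 0/0
    case directly: a single-point grid has no spacing, so dx is defined
    as 0, which effectively places the single sample at x0.
    """
    if xsteps == 1:
        return 0.0
    return (x1 - x0) / (xsteps - 1)


print(grid_spacing(0.0, 10.0, 11))  # 1.0
print(grid_spacing(5.0, 5.0, 4))    # 0.0 (degenerate range, no special case needed)
print(grid_spacing(0.0, 10.0, 1))   # 0.0 (the case an x1==x0 test would miss)
```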
Also, as a bit of sanity check, is it ever possible for x/ysteps to be zero?
Argument validation for the grid specification arguments could be done on the Python side. That's basically in delaunay.interpolate.LinearInterpolator.__getitem__ (but we should grep for other possible references).
Currently no validation is done. If we want to add it, we should think about the possible semantics of the various cases.
To @WeatherGod (2): 0 for x/ysteps would just return an empty grid. If you put a negative value, it would fail before reaching the loop, because numpy raises an exception if you try to allocate an array with a negative dimension.
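The claim about zero and negative dimensions can be verified directly (a quick check, not part of the patch):

```python
import numpy as np

# xsteps == 0 just yields an empty grid axis; no error is raised.
empty = np.zeros((0, 5))
print(empty.shape)  # (0, 5)
print(empty.size)   # 0

# A negative dimension, by contrast, fails immediately with ValueError,
# before any interpolation loop could run.
try:
    np.zeros((-1, 5))
except ValueError as exc:
    print("rejected:", exc)
```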
To @WeatherGod (1), @ianthomas23 : To be exact, I was checking for 0/0 (which is nan) rather than the general */0 (which would be "inf" unless the * is also 0).
The case with xsteps==1 and x0!=x1 may have valid use-cases, but it is not clear where the single point should be. Setting dx=0 in this case is equivalent to putting it at x0. We could, for example, place it at 0.5*(x0+x1), or at x1. The stable code would fill the array with inf, which may be considered a kind of error indication.
On the other hand, I had actually used the case (x0==x1, xsteps>1) before, and from my point of view I was merely "extending its range of validity" to xsteps==0, without affecting the case described above.
However: I tend to think that having the "*/0" case produce x0 is actually better than the current "inf", and I do agree that it makes the code somewhat more readable.
I'll accept your suggestion - just making sure you understand the implications.
btw: the other fix (#998) has the advantage that the edge cases are naturally resolved, and you do not have to think about all these cases (I guess it would also produce x0 rather than inf, but I did not check that).
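For comparison, numpy's own np.linspace resolves the same edge cases in the way discussed above: a single sample lands on the start point, and a degenerate range yields a constant grid (illustrative values):

```python
import numpy as np

# num == 1 with x0 != x1: the single sample is placed at the start point,
# matching the dx = 0 convention.
print(np.linspace(0.0, 10.0, 1))  # [0.]

# x0 == x1 with several steps: a constant grid, again dx = 0.
print(np.linspace(5.0, 5.0, 4))   # [5. 5. 5. 5.]
```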
@pelson: I also prefer this over #998 so I am closing that PR. I can see us having to make major changes to the delaunay code in the long term to improve its robustness. A complete rewrite may be the best solution. Hence I am not particularly bothered about the other testing issues highlighted in #998. @AmitAronovitch: if you can address the various line comments in this PR then I would happily commit this PR without any further widening of scope.
@ianthomas23: For the future code: I think we should probably use some external library (possibly qhull, as scipy does) rather than maintain this here - I'll post to the mailing list later on. I'll patch the C code in a minute (but please read my long comment above). Other stuff has already been patched, except the comment about class methods vs module functions (line 189), which did not seem right to me (please reply there if you think otherwise). @pelson, @WeatherGod, @ianthomas23: Thanks for the review.
I agree with this. I recommend Jonathan Shewchuk's triangle library.
How can I make the auto-build try again? Should I make a dummy commit on this branch, or is there another way to make it try again?
@AmitAronovitch Not sure. Are you running the tests with Python 3.x?
@dmcdougall no I did not. Travis had reported success on all versions (including 3.x) except 2.6. My default Python is 2.7.
@AmitAronovitch No need to add a dummy commit, @mdboom worked some travis magic (7dd57d2). If you rebase this branch against current master that should trigger a travis re-build.
@dmcdougall - seems like that patch would only affect Python 3.x (so it is probably unrelated to the current problem). Rebasing would certainly trigger a re-build, but would probably require a whole new PR (which would make the conversation above less accessible to reviewers). Still, I don't mind doing that if it makes it easier to merge this code.
@WeatherGod, @ianthomas23: I repeat my previous comment (which was hidden in the "Outdated Diff" above), since I see now that it is hard to follow the way it was phrased there (it was intermixed with other comments): you had suggested that in my patch I should check for xsteps==1 rather than x1==x0.
@AmitAronovitch Oh, I'm sorry. There's been a recent problem with numpy on Python 3.x, and I assumed this was one of them. That's my fault for not checking properly. Yes, the python26 build failure was a dud, but it passed with python27. It's probably fine, but if you want I can check the tests on my local machine with 2.6 if it'd make you feel better.
I think I am ok with such a change.
I'm OK with this PR, and it has been hanging around for too long for such a small change, so I will merge it. I'm taking @dmcdougall's word that the Travis failure isn't really a problem.
alternate fix for issue #997
The fix proposed in #998 changes the looping method and introduces (tiny) differences in accuracy. In contrast, this PR keeps the loop intact and fixes the issue with a special-purpose if statement.