WIP: Use an n-dimensional Bresenham algorithm in draw_nd #4277
Conversation
Hello @marcotama! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:
Comment last updated at 2019-11-27 13:21:27 UTC
Hey @marcotama! This is great! @stefanv was not super happy that we didn't get this right in #2043. There's a bit of cleanup to be done before merging, but in principle we totally want this in! I'm sorry that I don't have time to do a full review right now (I'm not actually familiar with the Bresenham algorithm), but the way forward would be to replace the original

In other news, I notice you're in Melbourne??? We should meet up and finish this up in person! =D You can email me at juan.nunez-iglesias@monash.edu
I implemented a Cython version of the same algorithm, but I am not an expert, and I am pretty sure it can be optimized further. It's obviously faster than the original one, but a fair comparison would need Cython versions of both. Here are the results of a simple test; as expected, the

@jni I sent you an email :)
@marcotama awesome! Can you produce the same plots for just 3D/4D and line length of 16, 32, ..., 16384? I think that is a much more common use case than 100-D data. 😂
(ignore the lines, they are linear interpolations) |
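A timing sweep like the one requested above could be scripted along these lines with `timeit`. This is only a sketch: `line_nd_stub` is a hypothetical stand-in defined here so the snippet runs on its own, and would be swapped for `line_nd` or the Cython version when actually benchmarking the PR.

```python
import timeit

def line_nd_stub(start, stop):
    # Trivial stand-in (naive rounding interpolation) so this sketch is
    # self-contained; replace with skimage.draw.line_nd / the new
    # implementation to benchmark the PR.
    n = max(abs(b - a) for a, b in zip(start, stop))
    return [tuple(a + round(i * (b - a) / n) for a, b in zip(start, stop))
            for i in range(n + 1)]

for ndim in (3, 4):
    for length in (16, 256, 4096):
        start = (0,) * ndim
        stop = (length,) + (length // 2,) * (ndim - 1)
        secs = timeit.timeit(lambda: line_nd_stub(start, stop), number=20)
        print(f"{ndim}D, length {length:5d}: {secs / 20 * 1e6:8.1f} us/call")
```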
@marcotama I pushed our changes from today to your branch. @scikit-image/core this is still WIP and not ready for review... Some of our adaptations of the algorithm to floating point values are not working correctly. |
…gy in a system by sometimes rounding up and sometimes down, but in the case of graphics it results in jitter if a line shifts by 1px
@jni Can you have a look at the current status? I believe I fixed the math, and the results are now almost identical to the existing implementation. (Cleanup remains a todo.) I say "almost" because there are differences due to a couple of reasons:

Obviously this needs to be double and triple checked ;)
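One likely source of such differences is the rounding mode. Python's built-in `round` and NumPy's `np.round` use round-half-to-even ("banker's rounding"), which matches the energy-minimization behaviour mentioned in the commit message above: unbiased on average, but exact halves round in alternating directions, which can show up as 1px jitter along a line. A minimal illustration (not from the PR):

```python
# Round-half-to-even: exact halves go to the nearest even integer.
halves = [0.5, 1.5, 2.5, 3.5]
print([round(h) for h in halves])       # [0, 2, 2, 4]

# Round-half-up is biased but monotone, so no alternation:
print([int(h + 0.5) for h in halves])   # [1, 2, 3, 4]
```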
The code I used is:

```python
from skimage.draw.draw_nd import line_nd, my_line_nd
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
from matplotlib.patches import Rectangle

tests = [
    ((0, 0), (2, 2)),          # m=1 (x0 is even)
    ((1, 1), (3, 3)),          # m=1 (x0 is odd)
    ((0, 0), (4, 2)),          # m=2 (x0 is even)
    ((1, 1), (5, 3)),          # m=2 (x0 is odd)
    ((1, 1), (3, 5)),          # x is the second coordinate
    ((1, 1), (5, 2.5)),        # y1 is decimal
    ((1.1, 1.4), (5.3, 2.2)),  # all decimals (this produces a bug - a repeated
                               # point in the original implementation, due to the
                               # call to np.ceil; maybe a -0.5 is due in there)
    ((0, 0), (5, -3)),         # m<0
    ((0, 0), (3, -5)),         # m<0, x is the second coordinate
    ((1, 1), (5, 2.5)),        # 3 dimensions
    ((1, 1), (5, -2.5)),       # 3 dimensions, m>0 and m<0
    ((1, 0), (5, -1.5)),       # 3 dimensions, m>0 and m<0, shifting causes a
                               # different shape in the old implementation
                               # (compared to the previous test)
    ((1, 1), (5, -2.4)),       # just need to tease out why the results are what
                               # they are in this and the next example
    ((1, 1), (5, -2.6)),
]


def plot_pixels(ax, pixels, line):
    facecolor = 'k'
    edgecolor = 'k'
    alpha = 0.5
    x0, y0, x1, y1 = line
    ax.grid()
    ax.set_aspect('equal', 'datalim')
    squares = []
    xs, ys = pixels
    for x, y in zip(xs, ys):
        square = Rectangle((x - .5, y - .5), 1, 1)
        squares.append(square)
    pc = PatchCollection(squares, facecolor=facecolor, alpha=alpha,
                         edgecolor=edgecolor)
    ax.add_collection(pc)
    if abs(x1 - x0) > abs(y1 - y0):
        ax.set_xlim(min(xs) - 1, max(xs) + 1)
    else:
        ax.set_ylim(min(ys) - 1, max(ys) + 1)
    ax.plot([x0, x1], [y0, y1], color='k')


for start, stop in tests:
    fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(10, 5))
    fig.suptitle(f"{start} to {stop}")
    line = start[0], start[1], stop[0], stop[1]
    ax1.set_title("Original")
    pixels1 = line_nd(start, stop, endpoint=True)
    plot_pixels(ax1, pixels1, line)
    ax2.set_title("New")
    pixels2 = my_line_nd(start, stop, endpoint=True)
    plot_pixels(ax2, pixels2, line)
    fig.savefig(f"{start} to {stop}.png")
```
@marcotama can you explain
? That is clearly two dimensions. =D Thanks for the code and images, though, this is super thorough and easy to introspect! I love it! Gonna have a proper look in the next few days.
@marcotama looking at the examples again, this kinda hits the nail on the head for me:
Is this caused by the choice of rounding or something else? It makes me tic a little. =P
See above. I'd like to understand why the error is higher in the new version, as I don't really see why that should be the case. But, other than that, I love these diagnostics and am convinced that this is a better algorithm for this purpose. Thanks for the work so far @marcotama! It's great! I don't have time to review the code thoroughly right now but I hope to be able to do so later this week. I will say that eventually we want your code to completely replace my existing code, but I'm happy to keep it there while we keep diagnosing this.
It would be fun to do the animation that you hinted at above; I wonder if it will show interesting artifacts. It seems like the old algorithm has a tendency to be more "symmetric", which intuitively feels like the way it should be; any intuition why? I'm also surprised that the error is different between the two: what could cause that?
Description

This is a possible enhancement of the `draw_nd` function that uses the Bresenham algorithm (extended to n dimensions). I suppose its strongest advantage is that it is unaffected by floating point error.

The current proposed version is vectorized in the number of dimensions, while a Python loop iterates over the various points. I suppose the original implementation is faster in most use cases, although I have not run any tests.

The current algorithm is a WIP because it works exclusively on integers (as did the original Bresenham). Ideas for how to handle floats are welcome (I am currently truncating them).
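For readers unfamiliar with the algorithm, the integer-only n-dimensional form can be sketched roughly as follows: pick the axis with the largest extent as the driving axis, then keep one Bresenham error accumulator for each remaining axis. This is an illustrative sketch (`line_nd_int` is a hypothetical name), not the PR's actual implementation:

```python
def line_nd_int(start, stop):
    """Yield the pixels of an n-D line between integer points (inclusive)."""
    ndim = len(start)
    delta = [stop[i] - start[i] for i in range(ndim)]
    step = [1 if d >= 0 else -1 for d in delta]
    delta = [abs(d) for d in delta]
    # The axis with the largest extent advances every iteration.
    driver = max(range(ndim), key=lambda i: delta[i])
    n_steps = delta[driver]
    # One integer error accumulator per axis (the driver's is unused).
    error = [2 * delta[i] - n_steps for i in range(ndim)]
    cur = list(start)
    points = [tuple(cur)]
    for _ in range(n_steps):
        cur[driver] += step[driver]
        for i in range(ndim):
            if i == driver:
                continue
            if error[i] > 0:          # this axis has drifted half a pixel
                cur[i] += step[i]
                error[i] -= 2 * n_steps
            error[i] += 2 * delta[i]
        points.append(tuple(cur))
    return points
```

Because every quantity is an integer, no floating point rounding is involved at any step.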
My idea is to use the fractional parts to initialize the errors, but to do that I think I need to find the least common denominator of said decimals after converting them to fractions, which has two problems:
a) it only works with fractional values (ok, on a computer no truly real numbers exist, but then the LCD could have a huge denominator, resulting in huge initial errors; think plotting from `(0, 0)` to `(cos(30°), sin(30°))`), and
b) computing the LCD sounds like extra work one might not want to do (but I guess we have the `integer` flag for that?).

Checklist
- Gallery example in ./doc/examples (new features only)
- Benchmark in ./benchmarks, if your changes aren't covered by an existing benchmark
For reviewers

- Make sure new functions are imported in the corresponding __init__.py.
- Note API changes in doc/release/release_dev.rst.
- To backport: @meeseeksdev backport to v0.14.x
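Point (a) in the description can be made concrete with Python's `fractions` module: a float that only approximates an irrational coordinate converts to an exact fraction whose denominator is a huge power of two, and any least common denominator built from it inherits that size. An illustrative sketch, not part of the PR:

```python
from fractions import Fraction
from math import cos, lcm, radians

# A "nice" decimal such as 0.5 is exactly representable in binary,
# so its exact fraction has a tiny denominator:
print(Fraction(0.5))          # prints 1/2

# But the float approximating cos(30°) = sqrt(3)/2 converts to a
# fraction whose denominator is a huge power of two (up to 2**53):
frac = Fraction(cos(radians(30)))
print(frac.denominator)

# A least common denominator involving such a value inherits that
# huge denominator, so the initial error terms would explode:
print(lcm(Fraction(0.5).denominator, frac.denominator))
```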