
WIP: Use an n-dimensional Bresenham algorithm in draw_nd #4277

Open · wants to merge 8 commits into main

Conversation

@marcotama commented Nov 3, 2019

Description

This is a possible enhancement of the draw_nd function that uses the Bresenham algorithm (extended to n dimensions).
I suppose its strongest advantage is that it is unaffected by floating-point error.

The currently proposed version is vectorized over the number of dimensions, while a Python loop iterates over the points. I suspect the original implementation is faster in most use cases, although I have not run any benchmarks.

The current algorithm is a WIP because it works exclusively on integers (as did the original Bresenham algorithm). Ideas for how to handle floats are welcome (I am currently truncating them).
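
For reference, here is a minimal sketch of what an integer-only n-dimensional Bresenham can look like, vectorized over dimensions with a Python loop over points (the name bresenham_nd and the exact error bookkeeping are illustrative, not the code in this PR):

import numpy as np

def bresenham_nd(start, stop):
    """Yield integer grid points on the segment from start to stop.

    Illustrative sketch, not the PR's implementation. Both endpoints
    must be same-length tuples of integers.
    """
    cur = np.asarray(start, dtype=np.intp).copy()
    stop = np.asarray(stop, dtype=np.intp)
    delta = np.abs(stop - cur)
    step = np.where(stop >= cur, 1, -1)
    # Drive along the axis with the largest extent; every other axis
    # accumulates an error term and advances when it crosses zero.
    primary = int(np.argmax(delta))
    error = 2 * delta - delta[primary]  # doubled, so everything stays integral
    for _ in range(delta[primary] + 1):
        yield tuple(cur.tolist())
        advance = error > 0
        advance[primary] = True  # the driving axis always steps
        cur[advance] += step[advance]
        error[advance] -= 2 * delta[primary]
        error += 2 * delta

print(list(bresenham_nd((0, 0), (4, 2))))
# [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]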

My idea is to use the fractional parts to initialize the errors, but to do that I think I need to find the least common denominator of said parts after converting them to fractions, which has two problems (a rough sketch of the idea follows this list):
a) it only works with rational values (granted, no truly real numbers exist on a computer, but the LCD could still have a huge denominator, resulting in huge initial errors; think of plotting from (0, 0) to (cos 30°, sin 30°)), and
b) computing the LCD sounds like extra work one might not want to do (but I guess we have the integer flag for that?).
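
To make the idea above concrete, a rough sketch (the coordinates and the limit_denominator cutoff are my assumptions, not part of the PR; math.lcm requires Python >= 3.9):

from fractions import Fraction
from math import lcm

# Convert each coordinate to an exact fraction, then scale everything by the
# least common multiple of the denominators so the error terms stay integral.
coords = (1.1, 1.4, 5.3)
fracs = [Fraction(c).limit_denominator(1000) for c in coords]
common = lcm(*(f.denominator for f in fracs))
scaled = [int(f * common) for f in fracs]
print(common, scaled)  # 10 [11, 14, 53]
# For an irrational endpoint such as cos(30°), the approximating denominator
# (and hence the initial errors) can blow up -- problem (a) above.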

Checklist

For reviewers

  • Check that the PR title is short, concise, and will make sense 1 year
    later.
  • Check that new functions are imported in corresponding __init__.py.
  • Check that new features, API changes, and deprecations are mentioned in
    doc/release/release_dev.rst.
  • Consider backporting the PR with @meeseeksdev backport to v0.14.x

@pep8speaks commented Nov 3, 2019

Hello @marcotama! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:

Line 183:1: W293 blank line contains whitespace
Line 190:80: E501 line too long (83 > 79 characters)
Line 218:22: E261 at least two spaces before inline comment
Line 219:22: E261 at least two spaces before inline comment
Line 220:22: E261 at least two spaces before inline comment
Line 221:22: E261 at least two spaces before inline comment
Line 222:22: E261 at least two spaces before inline comment
Line 223:24: E261 at least two spaces before inline comment
Line 224:30: E261 at least two spaces before inline comment
Line 224:80: E501 line too long (175 > 79 characters)
Line 225:23: E261 at least two spaces before inline comment
Line 226:23: E261 at least two spaces before inline comment
Line 227:30: E261 at least two spaces before inline comment
Line 228:31: E261 at least two spaces before inline comment
Line 229:31: E261 at least two spaces before inline comment
Line 229:80: E501 line too long (145 > 79 characters)
Line 230:31: E261 at least two spaces before inline comment
Line 230:80: E501 line too long (118 > 79 characters)
Line 239:12: W292 no newline at end of file

Comment last updated at 2019-11-27 13:21:27 UTC

@jni (Member) commented Nov 8, 2019

Hey @marcotama! This is great! @stefanv was not super happy that we didn't get this right in #2043. There's a bit of cleanup to be done before merging, but in principle we totally want this in!

I'm sorry that I don't have time to do a full review right now (I'm not actually familiar with the Bresenham algorithm), but the way forward would be to replace the original draw_line_nd function with this one, once the inner loop is sped up by either vectorising or switching to Cython.

In other news, I notice you're in Melbourne??? We should meet up and finish this up in person! =D You can email me at juan.nunez-iglesias@monash.edu

@marcotama (Author)

I implemented a Cython version of the same algorithm, but I am not an expert and I am pretty sure it can be optimized further. It is obviously faster than the original one, but a fair comparison would require Cython versions of both.

Here are the results of a simple test; as expected, the linspace-based algorithm outperforms Bresenham in low dimensions, at least in its pure-Python form.
[plot: runtime comparison of the two algorithms]
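
A timing comparison along these lines could be produced with a sketch like the following (my_line_nd is the WIP function on this branch; the sizes and repeat counts are arbitrary choices):

import timeit

from skimage.draw.draw_nd import line_nd, my_line_nd

for ndim in (2, 3, 4):
    for length in (16, 256, 4096):
        start = (0,) * ndim
        stop = (length,) + (length // 2,) * (ndim - 1)
        # Time 100 calls of each implementation on the same segment.
        t_old = timeit.timeit(lambda: line_nd(start, stop), number=100)
        t_new = timeit.timeit(lambda: my_line_nd(start, stop), number=100)
        print(f"{ndim}D, length {length}: old {t_old:.4f}s, new {t_new:.4f}s")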

@jni I sent you an email :)

@jni (Member) commented Nov 10, 2019

@marcotama awesome! Can you produce the same plots for just 3D/4D and line lengths of 16, 32, ..., 16384? I think that is a much more common use case than 100-D data. 😂

@marcotama (Author)

Obviously I wasn't thinking fourth-dimensionally! Silly me!

Here you go!

[plots: 2D, 3D, and 4D benchmark comparisons]

@marcotama (Author)

(ignore the lines, they are linear interpolations)

@jni (Member) commented Nov 26, 2019

@marcotama I pushed our changes from today to your branch.

@scikit-image/core this is still WIP and not ready for review... Some of our adaptations of the algorithm to floating point values are not working correctly.

@marcotama (Author)

@jni Can you have a look at the current status? I believe I have fixed the math, and the results are now almost identical to the existing implementation. (Cleanup remains a todo.)

I say "almost" because there are differences due to a couple of reasons:

  • I removed the banker's rounding (i.e. rounding to the nearest even number) for the following reason: banker's rounding was introduced (afaik) to preserve, on average, the energy in "the system"; rounding ~.5 sometimes up and sometimes down achieves that. In a graphics library, however, I argue that should not be the policy: if it were, a line that passes through some ~.5 coordinate and is shifted up 1px at a time in an animation would show jitter, and it should not (a minimal demonstration follows this list).
  • Different algorithms have different biases for cases where the line passes exactly between two pixels (regardless of whether the bias is explicit, as with rounding, or implicit, e.g. due to >/>= applied to negative/positive numbers in the code).
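
A minimal demonstration of the jitter argument (assumed values, not code from this PR; Python's built-in round implements banker's rounding):

for shift in range(4):
    y = 0.5 + shift  # the line crosses exactly between two pixels
    print(f"y={y}: banker's -> {round(y)}, half-up -> {int(y + 0.5)}")
# banker's: 0.5 -> 0, 1.5 -> 2, 2.5 -> 2, 3.5 -> 4 (uneven steps = jitter)
# half-up:  0.5 -> 1, 1.5 -> 2, 2.5 -> 3, 3.5 -> 4 (uniform 1px shift)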

Obviously this needs to be double and triple checked ;)

@marcotama (Author) commented Dec 1, 2019

I have written a little notebook to generate some plots comparing the two algorithms. Here are the plots for the test cases in the code (I reduced the 3D cases to 2D).

(0, 0) to (2, 2)
(1, 1) to (3, 3)
All good here.

(0, 0) to (4, 2)
(1, 1) to (5, 3)
There are differences. This is an example where banker's rounding creates jitter if you animate a line shifting upwards. The jitter is due to inconsistencies between line shapes depending on whether the endpoints are closer to an even or an odd integer.

(0, 0) to (3, -5)
(0, 0) to (5, -3)
I believe these are how they are meant to be.

(1, 1) to (3, 5)
m>1 seems to also be all good.

(1, 1) to (5, -2.4)
(1, 1) to (5, -2.5)
(1, 1) to (5, -2.6)
These three show differences between the two algorithms. I aesthetically prefer the new algorithm, but the old algorithm seems to have less error (i.e. distance between the line and the centers of the colored pixels).

(1, 1) to (5, 2.5)
(1, 0) to (5, -1.5)
These two show that the two algorithms have a different bias for rounding. There is no such thing as "correct".

(1.1, 1.4) to (5.3, 2.2)
This last one shows that there is a bug in the original algorithm - the same pixel is output twice.
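
One way to confirm the repeated pixel programmatically (a sketch using np.unique over coordinate columns):

import numpy as np
from skimage.draw.draw_nd import line_nd

coords = np.stack(line_nd((1.1, 1.4), (5.3, 2.2), endpoint=True))
# Count occurrences of each coordinate column; any count > 1 is a duplicate.
_, counts = np.unique(coords, axis=1, return_counts=True)
print("repeated pixels:", int((counts > 1).sum()))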

@marcotama (Author)

The code I used is:

from skimage.draw.draw_nd import line_nd, my_line_nd
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
from matplotlib.patches import Rectangle

tests = [
    ((0, 0), (2, 2)),  # m=1 (x0 is even)
    ((1, 1), (3, 3)),  # m=1 (x0 is odd)
    ((0, 0), (4, 2)),  # m=2 (x0 is even)
    ((1, 1), (5, 3)),  # m=2 (x0 is odd)
    ((1, 1), (3, 5)),  # x is the second coordinate
    ((1, 1), (5, 2.5)),  # y1 is decimal
    # all decimals; this produces a bug (a repeated point) in the original
    # implementation, due to the call to np.ceil (maybe a -0.5 is due there)
    ((1.1, 1.4), (5.3, 2.2)),
    ((0, 0), (5, -3)),  # m<0
    ((0, 0), (3, -5)),  # m<0, x is the second coordinate
    ((1, 1), (5, 2.5)),  # 3 dimensions
    ((1, 1), (5, -2.5)),  # 3 dimensions, m>0 and m<0
    # 3 dimensions, m>0 and m<0; shifting causes a different shape in the
    # old implementation (compared to the previous test)
    ((1, 0), (5, -1.5)),
    # just need to tease out why the results are what they are in this and
    # the next example
    ((1, 1), (5, -2.4)),
    ((1, 1), (5, -2.6)),
]


def plot_pixels(ax, pixels, line):
    """Draw each rasterized pixel as a unit square, with the ideal line on top."""
    facecolor = 'k'
    edgecolor = 'k'
    alpha = 0.5
    x0, y0, x1, y1 = line

    ax.grid()
    ax.set_aspect('equal', 'datalim')
    xs, ys = pixels
    squares = [Rectangle((x - .5, y - .5), 1, 1) for x, y in zip(xs, ys)]
    pc = PatchCollection(squares, facecolor=facecolor, alpha=alpha,
                         edgecolor=edgecolor)
    ax.add_collection(pc)
    # Pad the limits along the major axis so the end pixels are visible.
    if abs(x1 - x0) > abs(y1 - y0):
        ax.set_xlim(min(xs) - 1, max(xs) + 1)
    else:
        ax.set_ylim(min(ys) - 1, max(ys) + 1)
    ax.plot([x0, x1], [y0, y1], color='k')


for start, stop in tests:
    fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(10, 5))
    fig.suptitle(f"{start} to {stop}")
    line = start[0], start[1], stop[0], stop[1]

    ax1.set_title("Original")
    pixels1 = line_nd(start, stop, endpoint=True)
    plot_pixels(ax1, pixels1, line)

    ax2.set_title("New")
    pixels2 = my_line_nd(start, stop, endpoint=True)
    plot_pixels(ax2, pixels2, line)

    fig.savefig(f"{start} to {stop}.png")

@jni (Member) commented Dec 2, 2019

@marcotama can you explain

    ((1, 1), (5, 2.5)),  # 3 dimensions

? That is clearly two dimensions. =D

Thanks for the code and images, though, this is super thorough and easy to introspect! I love it! Gonna have a proper look in the next few days.

@jni (Member) commented Dec 2, 2019

@marcotama looking at the examples again, this kinda hits the nail on the head for me:

I aesthetically prefer the new algorithm, but the old algorithm seems to have less error

Is this caused by the choice of rounding or something else? It makes me tic a little. =P

These two show that the two algorithms have a different bias for rounding. There is no such thing as "correct".

See above.

I'd like to understand why the error is higher in the new version, as I don't really see why that should be the case. But, other than that, I love these diagnostics and am convinced that this is a better algorithm for this purpose. Thanks for the work so far @marcotama! It's great! I don't have time to review the code thoroughly right now but I hope to be able to do so later this week.

I will say that eventually we want your code to completely replace my existing code, but happy to keep it there while we keep diagnosing this.

@stefanv (Member) commented Dec 2, 2019

It would be fun to do the animation that you hinted at above; I wonder if it will show interesting artifacts.

It seems like the old algorithm has a tendency to be more "symmetric", which intuitively feels the way it should be; any intuition why? I'm also surprised that the error is different between the two—what could cause that?
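
For what it's worth, the shifting-line animation could be sketched with matplotlib's FuncAnimation (all parameters below are placeholder choices, not from this PR):

import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from skimage.draw.draw_nd import line_nd

fig, ax = plt.subplots()

def update(frame):
    # Shift the line by 0.1 px per frame and re-rasterize it; jitter shows
    # up as pixels that flicker instead of translating smoothly.
    offset = 0.1 * frame
    ax.clear()
    rr, cc = line_nd((offset, 0), (10 + offset, 20), endpoint=True)
    ax.scatter(cc, rr, marker='s', color='k')
    ax.set_xlim(-1, 21)
    ax.set_ylim(-1, 16)

anim = FuncAnimation(fig, update, frames=50, interval=100)
plt.show()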

Base automatically changed from master to main February 18, 2021 18:23