
Rounding error in ccw function causes mistakes in edge intersection evaluation #19

Closed

eshira opened this issue Jun 1, 2017 · 4 comments

eshira commented Jun 1, 2017

In the ccw function defined in visible_vertices.py, the term `area = (B.x - A.x) * (C.y - A.y) - (B.y - A.y) * (C.x - A.x)` is evaluated. Rounding error from Python's floating-point representation when computing this term can make the function classify collinear points as non-collinear. This can cause lots of problems later on (in my particular application's visibility graph, many edges were evaluated incorrectly).
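For reference, here is a minimal sketch of a `ccw` test of this shape (the exact code in visible_vertices.py may differ), together with a collinear triple that the strict zero comparison misclassifies:

```python
class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

def ccw(A, B, C):
    # Strict comparison against zero: any rounding error in `area` flips
    # a collinear triple to "clockwise" or "counterclockwise".
    area = (B.x - A.x) * (C.y - A.y) - (B.y - A.y) * (C.x - A.x)
    if area > 0:
        return 1    # counterclockwise
    if area < 0:
        return -1   # clockwise
    return 0        # collinear

# These three points lie exactly on the line y = 2x - 0.1, but the area
# term evaluates to a tiny positive number (on the order of 1e-17), not 0.0:
A, B, C = Point(0.1, 0.1), Point(0.2, 0.3), Point(0.3, 0.5)
print(ccw(A, B, C))  # 1, where 0 (collinear) is the correct answer
```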

I am using Python 2.7, so I cannot make use of math.isclose(). Instead, I defined an isclose function in the file and used it to check whether the area computed by ccw was close to zero, with an absolute tolerance of 0.0000000000000001 (1e-16).
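For anyone else stuck on Python 2, the documented equivalent of `math.isclose` (Python 3.5+) from PEP 485 is a one-liner; note that it ignores the inf/nan corner cases the stdlib version handles:

```python
def isclose(a, b, rel_tol=1e-09, abs_tol=0.0):
    # PEP 485 reference implementation of math.isclose.
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

# When comparing against zero the relative term vanishes, so an absolute
# tolerance is required:
print(isclose(1e-17, 0.0, abs_tol=1e-16))  # True
print(isclose(1e-17, 0.0))                 # False without abs_tol
```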

TaipanRex added the bug label Jun 2, 2017
eshira commented Jun 3, 2017

I chose too small a tolerance value.

I hunted down a bug that boiled down to mis-ordering of edges in the `open_edges` list: in the definition of the less-than operator for `EdgeKey`, `edge_intersect` was returning False in the case pictured below.

[image: probl1]

The red line connects the point whose visible vertices are being computed to the cyan endpoint, which is the current point being evaluated for visibility. This red line is eventually, and incorrectly, evaluated as visible.

The magenta line is the scan line. The cyan edge is mistakenly considered not to intersect the magenta line because of rounding error. Edges 2 and 3 are then not ordered correctly (each compares as less than the other), which later causes a failure to delete edge 2 from `open_edges` because its index is wrong. As a result, this edge persists in `open_edges` as `open_edges[0]`, which is not the right edge to evaluate when we get to the cyan point.
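As a hypothetical toy example of why an inconsistent less-than is so destructive (this is not the `EdgeKey` code, just the failure pattern): once `a < b` and `b < a` can both be true, any search or index lookup over `open_edges` can land on the wrong slot.

```python
class Key(object):
    def __init__(self, dist):
        self.dist = dist

    def __lt__(self, other):
        # A rounding-sensitive comparison that perturbs only its own
        # operand, the way an error inside edge_intersect might.
        return self.dist - 1e-16 < other.dist

a, b = Key(1.0), Key(1.0)
print(a < b, b < a)  # True True -- not a strict ordering
```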

In the end just changing the abs_tol to 10e-15 (from 10e-16) was sufficient to solve this problem. I need to read up on representation error and justify a better choice for the abs_tol.
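Concretely, the change amounts to something like the following sketch (using the `isclose` helper above and the abs_tol mentioned; not the exact patch):

```python
def ccw(A, B, C):
    area = (B.x - A.x) * (C.y - A.y) - (B.y - A.y) * (C.x - A.x)
    if isclose(area, 0, abs_tol=10e-15):
        return 0    # treat near-zero areas as collinear
    return 1 if area > 0 else -1
```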

TaipanRex (Owner) commented:

From Kitzinger's thesis:

> Another issue about collinearity is that sometimes it is difficult to exactly specify the coordinates of points that are supposed to be collinear to each other. To help with this, all of the implementations described in this paper allow the user to specify a tolerance (given by some small epsilon) in which the angles or slopes may deviate to still be considered collinear. This also allows slope and angle calculations within the implementations to lose some precision (which is inevitable with IEEE floating point representation).

I never implemented this; nice that you found a solution.

Note that in `angle2`, I round to 5 decimals. `angle2` is used in the ordering of edges, so maybe that is having an effect.
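For context, `angle2` presumably has roughly this shape, computing the angle at `point_b` via the law of cosines (a sketch; the repo's actual code may differ). The `round(..., 5)` is the rounding referred to above:

```python
from math import acos, sqrt

def angle2(point_a, point_b, point_c):
    # Squared side lengths for the law of cosines.
    a = (point_c.x - point_b.x) ** 2 + (point_c.y - point_b.y) ** 2
    b = (point_c.x - point_a.x) ** 2 + (point_c.y - point_a.y) ** 2
    c = (point_b.x - point_a.x) ** 2 + (point_b.y - point_a.y) ** 2
    cos_value = (a + c - b) / (2 * sqrt(a) * sqrt(c))
    return acos(round(cos_value, 5))
```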

TaipanRex (Owner) commented:

@eshira

> In the end just changing the abs_tol to 10e-15 (from 10e-16) was sufficient to solve this problem. I need to read up on representation error and justify a better choice for the abs_tol.

So setting a tolerance in `ccw()` was enough, or did you have to make changes to the `EdgeKey` class, i.e. set a tolerance somewhere else as well?

TaipanRex (Owner) commented:

@eshira this and this give a quick overview of representation error. You can use the `Decimal` class to see what the actual representation of a float is. For example:

```python
>>> from decimal import Decimal
>>> Decimal(1.1)
Decimal('1.100000000000000088817841970012523233890533447265625')
```

The representation error seems to consistently appear at the 18th significant digit. The Wikipedia article states that the floating-point representation Python uses has 15-17 significant decimal digits of precision. So when you start adding or multiplying numbers together, the errors from the 18th significant digit onward start to add up:

```python
>>> Decimal(1.1 * 1.1)
Decimal('1.2100000000000001865174681370262987911701202392578125')
```

Now the error is occurring from the 17th significant digit.

So the correct tolerance depends on your coordinate system. For my use case, I have numbers with up to three digits before the decimal point, which leaves less precision after the decimal point. A tolerance of 15 decimal places has been working fine and makes sense.
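The magnitude effect is easy to see with `Decimal` (illustrative values):

```python
from decimal import Decimal

# Both floats carry roughly 15-17 significant digits in total, so digits
# spent before the decimal point are unavailable after it:
print(Decimal(1.1))    # error appears around the 17th significant digit
print(Decimal(100.1))  # same total precision, fewer correct decimal places
```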

Maybe there is a better way using NumPy or something; I will have to look into that.

TaipanRex added a commit that referenced this issue Jul 4, 2017
calculations directly. This saves 1 square root operation and
3 squaring operations. This also reduces the floating point
representation error, and we no longer need to round to 5 decimals.

Might be fringe cases where a tolerance is needed, I am not
sure.

See also: #19
TaipanRex added a commit that referenced this issue Apr 14, 2018
See also: #28, #19, e9eeeaa

Co-authored-by: Pierre de Beaucorps <pierre.de-beaucorps@inria.fr>
TaipanRex added a commit that referenced this issue Apr 15, 2018
TaipanRex added a commit that referenced this issue May 30, 2018
Still seeing some errors, so reduced the COLIN_TOLERANCE from 13 to 10.
This solves the specific example given in #33, and I also mentioned
in #28 that I would reduce the tolerance to better avoid these types
of errors.

@tjansson60 has a suggested solution that does not involve truncating
the cos_value, I need to performance test this first.

See: #33, #28, #19
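The truncation the commit message refers to presumably works along these lines (a sketch; `COLIN_TOLERANCE` is the name from the commit message, everything else is assumed):

```python
COLIN_TOLERANCE = 10
T = 10 ** COLIN_TOLERANCE

# Truncate (not round) cos_value to COLIN_TOLERANCE decimal places
# before it is passed to acos:
cos_value = 0.12345678901234567       # hypothetical value
truncated = int(cos_value * T) / float(T)
print(truncated)                      # 0.123456789
```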