
Conversation

@barendgehrels
Collaborator

Fixes: #629

This was a long-standing warning that we were pinged about several times.

I think this is the best solution at the moment. However, it solves only one case. There is this message:

This pattern of select_most_precise<double,...> is pretty common, and I have found the same warning being produced in other parts of boost that do the same thing when used with std::int64_t.

I cannot fix all of that now; it will first have to wait at least until the big improvement in intersections.

Then we should probably fix it in select_most_precise itself.
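
For reference, a minimal, self-contained sketch of that pattern (illustrative only, not the code changed by this PR; the helper function is hypothetical):

```cpp
#include <cstdint>

#include <boost/geometry/util/select_most_precise.hpp>

// Hypothetical helper, only to show where the conversion happens.
template <typename CoordinateType>
void promote_coordinate(CoordinateType const& c)
{
    // With CoordinateType = std::int64_t this selects double ...
    using promoted_type = typename boost::geometry::select_most_precise
        <
            double, CoordinateType
        >::type;

    // ... so this initialization converts int64_t -> double, and MSVC warns
    // with C4244 "possible loss of data".
    promoted_type const p = c;
    (void)p;
}

int main()
{
    promote_coordinate(std::int64_t(1) << 60);
    return 0;
}
```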

Please give me your opinions.

If possible, it should be included in 1.88.

@barendgehrels barendgehrels self-assigned this Mar 15, 2025
@barendgehrels barendgehrels added this to the 1.88 milestone Mar 15, 2025
Collaborator

@tinko92 tinko92 left a comment


Looks good to me; I commented on two things, both minor. Given that you want this in 1.88 and it is a long-standing bug, I'm in favour of merging regardless of the resolution of my comments, so I selected "Approve".

@tinko92
Collaborator

tinko92 commented Mar 16, 2025

One more thought on this: I can see that this change affects the coordinate type in the context of side_by_triangle, which evaluates a second-degree polynomial. The integer promotion promotes to an integral type with twice the digits, which makes the predicate evaluate without integer overflow: if the coordinate_type were e.g. int32_t and the input geometry fully used its range, the predicate would be evaluated in int64_t without overflow. For int64_t, without int128 or a multiprecision type, this is not possible.
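
To make that concrete, a hypothetical standalone sketch (not the Boost.Geometry implementation) of a side value evaluated in a promoted integer type:

```cpp
#include <cstdint>
#include <iostream>

// Side value of point p relative to segment (a, b): a 2x2 determinant of
// coordinate differences, i.e. a second-degree polynomial in the coordinates.
template <typename Coordinate, typename Promoted>
Promoted side_value(Coordinate ax, Coordinate ay,
                    Coordinate bx, Coordinate by,
                    Coordinate px, Coordinate py)
{
    Promoted const dx  = Promoted(bx) - Promoted(ax);
    Promoted const dy  = Promoted(by) - Promoted(ay);
    Promoted const dpx = Promoted(px) - Promoted(ax);
    Promoted const dpy = Promoted(py) - Promoted(ay);
    return dx * dpy - dy * dpx;
}

int main()
{
    // int32_t coordinates promoted to int64_t (twice the digits): the products
    // in the determinant stay representable. For int64_t coordinates there is
    // no built-in integer of twice the width to play the role of Promoted.
    std::int64_t const s = side_value<std::int32_t, std::int64_t>(
        0, 0, 2000000000, 1, 1000000000, 2000000000);
    std::cout << s << '\n';
    return 0;
}
```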

I think the choice of giving floating-point priority over integer in select_most_precise, and this use in side_by_triangle, has some merit.

Converting coordinates that use the full range of int64_t to double will lose 10 bits of precision, which is fair for the compiler to warn about, and may produce an incorrect result if 1) they use close to the full range of the type (actually, anything larger than ~2^26 may already lead to problems) and 2) the points are nearly collinear.
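
A small demonstration of that precision loss (illustrative only; double's 53-bit significand against the 63 value bits of int64_t is where the 10 bits come from):

```cpp
#include <cstdint>
#include <iostream>

int main()
{
    // Near 2^62 the spacing between adjacent doubles is 2^10 = 1024, so
    // distinct int64_t coordinates can convert to the same double.
    std::int64_t const a = (std::int64_t(1) << 62) + 1;
    std::int64_t const b = (std::int64_t(1) << 62) + 511;

    std::cout << std::boolalpha
              << (a == b) << '\n'                                            // false
              << (static_cast<double>(a) == static_cast<double>(b)) << '\n'; // true
    return 0;
}
```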

Evaluating the second-degree polynomial in side_by_triangle in int64_t will likely overflow if the coordinates use close to the full range of the type, and may also produce the wrong result, but now silently, without a warning, and even if the points are not near-collinear, with undefined behaviour (signed integer overflow) and, AFAIK, no standard way to detect that it happened. Unlike double, though, it guarantees correctness for numbers with magnitude roughly in the range 2^26 to 2^31.
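
To illustrate the silent part, a sketch assuming the non-standard __int128 of GCC/Clang, and using unsigned arithmetic only to make the wrap observable (in the actual signed int64_t evaluation the overflow would be undefined behaviour):

```cpp
#include <cstdint>
#include <iostream>

int main()
{
    // Coordinates around 2^40 are well inside int64_t, but their differences
    // are ~2^41 and the products in the side determinant are ~2^82, which do
    // not fit in 64 bits.
    std::int64_t const diff = std::int64_t(1) << 41;  // a coordinate difference

    std::uint64_t const wrapped = std::uint64_t(diff) * std::uint64_t(diff); // wraps mod 2^64
    __int128 const exact = __int128(diff) * __int128(diff);                  // 2^82

    std::cout << (static_cast<__int128>(wrapped) == exact) << '\n';          // prints 0
    return 0;
}
```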

So this change trades a valid warning, which is only relevant if the int64_t coordinates are rather large numbers (>2^26), for silently behaving worse for somewhat larger numbers (>2^31), and is otherwise equivalent. That said, the most common case may be users with much smaller coordinates who are warned needlessly.

Edit: I guess this can also be addressed later. It shouldn't be too troublesome to catch that case by testing the coordinates' magnitude in advance and calling a slower fall-back side_by_triangle, using double-word multiplication, for the special case of both int64_t and very large coordinates; see the sketch below. Having this extra handling for when promotion to double width is not possible would allow us both to avoid the warning and to have good behaviour without integer overflow. Still in favor of merging.
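
Roughly what I have in mind, as a hedged sketch: the names, the threshold and the use of __int128 as a stand-in for an explicit double-word multiplication are all illustrative, not a proposal for the actual Boost.Geometry code.

```cpp
#include <cstdint>

using std::int64_t;

// Fast path: exact when every coordinate magnitude is below 2^30, so the
// differences fit in 31 bits and each product in the determinant fits in 62.
inline int side_small(int64_t ax, int64_t ay, int64_t bx, int64_t by,
                      int64_t px, int64_t py)
{
    int64_t const d = (bx - ax) * (py - ay) - (by - ay) * (px - ax);
    return d > 0 ? 1 : (d < 0 ? -1 : 0);
}

// Slow path: the same determinant in 128 bits. __int128 stands in for an
// explicit double-word multiplication; the absolute extremes of int64_t would
// still need a wider (or multiprecision) type.
inline int side_wide(int64_t ax, int64_t ay, int64_t bx, int64_t by,
                     int64_t px, int64_t py)
{
    __int128 const d = (__int128(bx) - ax) * (__int128(py) - ay)
                     - (__int128(by) - ay) * (__int128(px) - ax);
    return d > 0 ? 1 : (d < 0 ? -1 : 0);
}

inline bool small_enough(int64_t v)
{
    int64_t const limit = int64_t(1) << 30;  // illustrative safety threshold
    return v > -limit && v < limit;
}

inline int side(int64_t ax, int64_t ay, int64_t bx, int64_t by,
                int64_t px, int64_t py)
{
    bool const fast = small_enough(ax) && small_enough(ay)
                   && small_enough(bx) && small_enough(by)
                   && small_enough(px) && small_enough(py);
    return fast ? side_small(ax, ay, bx, by, px, py)
                : side_wide(ax, ay, bx, by, px, py);
}
```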

Member

@vissarion vissarion left a comment


I also agree with merging.

@vissarion vissarion merged commit 51a919d into boostorg:develop Mar 18, 2025
23 checks passed
@barendgehrels
Collaborator Author

One more thought on this

I think the choice of giving floating-point priority over integer in select_most_precise, and this use in side_by_triangle, has some merit.

Yes, going to double had some merit indeed.

But I still think we should not have done it. Originally we didn’t. I don’t remember when it was changed, but the “integer” world is meant to be fully safe, without FP issues in the calculations. Therefore the side calculation should be stable and operate on input=integer, output=integer. The same goes for other calculations. The intersections can fall off the grid, of course, but they are rounded.

So integer should stay integer, as much as possible.

And then you are right: whenever we multiply integers, we need to have the space for it. That was also discussed, long ago, with the “customer” (in this case: @vschoech). They (from Thinkcell) are aware that they should not use the full range of int64_t. But we should indeed document it properly: either only part of the range can be used, or promotion must be possible and successful (for example, we could promote to Boost.Multiprecision using that define).
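
As a hedged sketch of that promotion route, assuming Boost.Multiprecision's fixed-width cpp_int typedefs (the function is illustrative and not the actual Boost.Geometry code or configuration macro):

```cpp
#include <cstdint>

#include <boost/multiprecision/cpp_int.hpp>

// Illustrative side sign computed in a multiprecision integer wide enough
// that no combination of int64_t coordinates can overflow.
int side_sign(std::int64_t ax, std::int64_t ay,
              std::int64_t bx, std::int64_t by,
              std::int64_t px, std::int64_t py)
{
    using promoted = boost::multiprecision::int256_t;

    promoted const d = (promoted(bx) - ax) * (promoted(py) - ay)
                     - (promoted(by) - ay) * (promoted(px) - ax);

    return d > 0 ? 1 : (d < 0 ? -1 : 0);
}
```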

@barendgehrels barendgehrels deleted the fix/issue_629 branch March 18, 2025 17:18


Successfully merging this pull request may close these issues.

MSVC Warning: C4244 'initializing': conversion from 'CoordinateType' to 'const PromotedType', possible loss of data
