Integer overflow #560

Merged
merged 18 commits on Feb 6, 2015

Conversation

Contributor

nikomatsakis commented Jan 7, 2015


thestinger commented Jan 7, 2015

If it's ever going to use hardware overflow instructions and cause an error only on-demand, then it can't panic or report precise errors. Having it use debug_assert! means it's going to add a huge amount of overhead to unoptimized builds regardless of any future optimizations. Anyway, it will never be free because operations with potential side effects are clearly going to block code motion optimizations in many cases where pure ones would not.
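The coupling to debug assertions that thestinger objects to can be pictured with today's cfg!(debug_assertions) flag; this is only a sketch of "checked when debug_assert! would be compiled in", not the lowering the RFC mandates.

```rust
// Sketch: the check exists only in builds where debug assertions are on,
// which is the source of the per-operation overhead in unoptimized builds.
fn add_i32(a: i32, b: i32) -> i32 {
    if cfg!(debug_assertions) {
        // Debug builds pay for the branch and the check on every operation...
        a.checked_add(b).expect("attempt to add with overflow")
    } else {
        // ...while release builds get a plain wrapping add.
        a.wrapping_add(b)
    }
}

fn main() {
    assert_eq!(add_i32(2, 3), 5);
}
```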


thestinger commented Jan 7, 2015

Slowing down unoptimized Rust even further is certainly relevant. Optimized builds take a long time to compile and there is absolutely no support for debugging optimized code with LLVM. It's inappropriate to claim that the performance hit from this isn't relevant.


bill-myers commented Jan 8, 2015

This is an improvement on the current state (not sure if it's the best overall, since solutions that don't fail might be better, but it's much better than silently overflowing).

I think the "wrapping"/modular types should be separate types (which could be called "m8", "m16", etc.), just like signed and unsigned integers are separate types.

This makes sense since they are mathematically different objects, with s32/u32 representing subsets of the infinite ring of integers ("Z"), while the "wrapping" types represent the finite rings of integers modulo 2^k ("Z/2^kZ").

In practice, this means that things like ordered comparisons or integer-like division make no sense on modular types (if needed, the programmer would have to cast to either the corresponding unsigned or signed type, which of course will in general yield different results).

[I'd also be tempted to remove the arithmetic operations from the current non-modular types since they aren't closed under them, but I guess that might be too extreme (the user would be forced to choose between manually calling "checked_add", cast to the modular types and back, or cast to bigint depending on what they want).]
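bill-myers's proposal can be sketched as a newtype that implements only the ring operations and deliberately omits Ord, Div, Rem and shifts. M32 is a hypothetical name for illustration; what eventually shipped in std was Wrapping<T>.

```rust
use std::ops::{Add, Mul, Sub};

// A modular u32: an element of Z/2^32Z, not a subset of Z.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct M32(u32);

impl Add for M32 {
    type Output = M32;
    fn add(self, rhs: M32) -> M32 { M32(self.0.wrapping_add(rhs.0)) }
}
impl Sub for M32 {
    type Output = M32;
    fn sub(self, rhs: M32) -> M32 { M32(self.0.wrapping_sub(rhs.0)) }
}
impl Mul for M32 {
    type Output = M32;
    fn mul(self, rhs: M32) -> M32 { M32(self.0.wrapping_mul(rhs.0)) }
}

fn main() {
    // Arithmetic wraps silently, as in Z/2^32Z:
    assert_eq!(M32(u32::MAX) + M32(1), M32(0));
    // There is deliberately no Ord impl: `M32(0) < M32(1)` would not compile.
    assert_eq!(M32(2) * M32(3), M32(6));
}
```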


Contributor

glaebhoerl commented Jan 8, 2015

@bill-myers Thanks for explaining this more thoroughly. What would the difference be between m32 and u32 using wrapping operations (i.e. the current u32)?

(Of course I'm +1 on the RFC as it is.)

@thestinger I agree that performance and not foreclosing on imprecise exceptions are important. But it's not clear to me that the present RFC would create any constraints in this area. I see this text:

However, when running in debug mode (in other words, whenever a debug_assert! would be compiled in) they are required to emit code to perform dynamic checks on all arithmetic operations that may potentially overflow.

I interpret this to mean "whenever debug_assert!s are enabled, overflow must also be checked (somehow)", as opposed to "overflow must be checked using debug_assert!s".


Contributor

arielb1 commented Jan 8, 2015

By the way, wraparound operations are also useful for "odometer comparison" (which is (a - b) as iKK < 0).


bill-myers commented Jan 8, 2015

@glaebhoerl

m32 would be the same as the current u32 except it would not support the <, >, <=, >=, /, % and >> operators (or any other operator where casting to s32 or u32 and back yields a different result). It might also be printed differently, for instance the value 42 as "42 mod 2^32".


thestinger commented Jan 9, 2015

@glaebhoerl: So does let _x = int::MAX + 1 report an error? If it must, then it can't ever perform well in the future.
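For reference, the alternatives being contrasted here look like this with the explicit methods std ended up providing (a sketch using today's names; the RFC-era int is today's isize):

```rust
fn main() {
    // With explicit methods, whether `MAX + 1` is "an error" is decided at
    // the call site rather than by the build mode:
    assert_eq!(i64::MAX.checked_add(1), None);      // overflow surfaced as a value
    assert_eq!(i64::MAX.wrapping_add(1), i64::MIN); // two's-complement wrap
    // A plain `i64::MAX + 1` panics when overflow checks are compiled in.
}
```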


Contributor

Aatch commented Jan 9, 2015

@thestinger it does report an error. No, it "must" not do so. If you'd read the RFC, you'd know that overflow produces an unspecified value. In builds where a debug_assert! is not removed (currently the absence of a --cfg ndebug flag), it will report an error.


thestinger commented Jan 9, 2015

If you'd read the RFC, you'd know that overflow produces an unspecified value.

I've read the RFC and I'm responding to it.

In builds where a debug_assert! is not removed (currently the absense of a --cfg ndebug flag), it will report an error.

So it strictly reports an error on overflow by default, which is the only case that really matters. Even Rust's own nightly releases aren't built with ndebug. You don't get to hand wave away significant performance issues because there's an obscure way to turn it off where the chosen semantics simply aren't enforced properly.


Member

huonw commented Jan 9, 2015

So it strictly reports an error on overflow by default, which is the only case that really matters. Even Rust's own nightly releases aren't built with ndebug. You don't get to hand wave away significant performance issues because there's an obscure way to turn it off where the chosen semantics simply aren't enforced properly.

The current build system defaults shouldn't influence language design; in any case, the defaults should be changed, and somewhat coincidentally there was actually some discussion along these lines in #rust-internals in the past few days (it's no-one's problem if they missed that discussion, just some extra commentary :) ).

If it's ever going to use hardware overflow instructions and cause an error only on-demand, then it can't panic or report precise errors. Having it use debug_assert! means it's going to add a huge amount of overhead to unoptimized builds regardless of any future optimizations. Anyway, it will never be free because operations with potential side effects are clearly going to block code motion optimizations in many cases where pure ones would not.

I wonder if we could tweak this part of the detailed design

However, when running in debug mode (in other words, whenever a debug_assert! would be compiled in), they are required to emit code to perform dynamic checks on all arithmetic operations that may potentially overflow.

slightly, to more explicitly allow e.g. coalescing multiple checks so that each possibly-overflowing operation may not be checked immediately, while still guaranteeing that any overflow will be detected at some point (in any case, this doesn't seem to be disallowed AFAICT). Also, I'll note that, as written, the RFC does not require the use of debug_assert itself, just that whatever mechanism is chosen is active when debug_asserts are.
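One concrete reading of "coalescing multiple checks" (my sketch, not something the RFC specifies): defer the per-operation checks across a loop by widening the accumulator, then perform a single check at the end.

```rust
use std::convert::TryFrom; // prelude in Rust 2021; explicit for older editions

// Sum u32 values with one deferred overflow check instead of one per add.
// Accumulating in u64 is sound for any slice shorter than 2^32 elements.
fn sum_u32_checked(xs: &[u32]) -> Option<u32> {
    let mut acc: u64 = 0;
    for &x in xs {
        acc += u64::from(x); // cannot overflow u64 at these lengths
    }
    u32::try_from(acc).ok() // the single coalesced check
}

fn main() {
    assert_eq!(sum_u32_checked(&[1, 2, 3]), Some(6));
    assert_eq!(sum_u32_checked(&[u32::MAX, 1]), None); // overflow still caught
}
```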


thestinger commented Jan 9, 2015

The current build system defaults shouldn't influence language design

The ridiculously poor performance of Rust's debug builds is a major problem, and this is an enormous step in the wrong direction. The RFC _is_ heavily influenced by the current compiler implementation because it's based on the assumption that debug builds are already slow. However, it's going to severely aggravate that problem and the inability to mix debugging with optimizations is a fixable LLVM issue which is not there for other compilers like GCC. It's the norm to use an optimized debug build with GCC (-Og -g or even -O2 -g) because it doesn't lose any of the information when it's doing stuff like inlining.

You can't claim that language design shouldn't be influenced by the build system defaults while supporting an RFC that's explicitly designed around a concept of debug and release builds.


thestinger commented Jan 9, 2015

It's absurd to draw a line between language and implementation in order to ignore the fact that out-of-the-box performance will be severely impacted. The practical implications of an RFC are very relevant and cannot be ignored by refusing to talk about anything but pure programming language design. Rust's optimization process takes a very long time so the performance of unoptimized builds is not unimportant considering the productivity loss from optimizing.

I can't see it as a step in the right direction unless it's explicitly an opt-in feature in all cases. It would be fine to tie it to debug_assert! as long as that's never enabled by default. Otherwise, the language is just becoming even more unusable without gaining a significant level of robustness in return.


thestinger commented Jan 9, 2015

How many real overflow bugs were found while converting the compiler? That's a 200k+ line codebase, so if this feature has any value at all there should be dozens of bug fixes to point at. Each case of intentional overflow is a case where this feature had a negative impact. The ratio between those is what determines the value of the feature vs. the cost of the feature. An acceptable ratio is a subjective issue, but the ratio itself is simply an important number to discuss here. You may be willing to accept hundreds or thousands of cases of "friction" in order to catch one bug, but it will drive away people who find the language too abrasive.


Member

huonw commented Jan 9, 2015

the inability to mix debugging with optimizations is a fixable LLVM issue which is not there for other compilers like GCC. It's the norm to use an optimized debug build with GCC (-Og -g or even -O2 -g) because it doesn't lose any of the information when it's doing stuff like inlining.

I'm afraid I don't understand why this current failing of LLVM is important. Could you expand on why integer overflow detection would be influenced by LLVM being able to build optimised debuggable code?

You can't claim that language design shouldn't be influenced by the build system defaults while supporting an RFC that's explicitly designed around a concept of debug and release builds.

I was specifically talking about Rust's makefiles currently defaulting to passing --cfg ndebug (as, I think, you were above), not the more abstract concept of a debug/release build.

Phrasing it in the latter terms: Rust currently defaults to a sort-of debug build; we could/should make it default to a release build. This seems mostly independent of detecting integer overflow in debug builds, to me?

It's absurd to draw a line between language and implementation in order to ignore the fact that out-of-the-box performance will be severely impacted.

I'm not doing that, I personally haven't said anything about performance or otherwise, just correcting/clarifying/commenting on some subpoints made earlier in this discussion, trying to keep everything as true (given what I personally understand) as possible. Whether this leads us to doing overflow detection or not is another matter.

It would be fine to tie it to debug_assert! as long as that's never enabled by default.

Are you saying that you would be OK with the RFC if it was adjusted to explicitly require using the debug_assert! macro, in an opt-in manner? (NB. it currently doesn't require using debug_assert! at all, it doesn't even require unwinding.)


thestinger commented Jan 9, 2015

I'm afraid I don't understand why this current failing of LLVM is important. Could you expand on why integer overflow detection would be influenced by LLVM being able to build optimised debuggable code?

I'm not the one tying it to debug vs. release builds.

I was specifically talking about Rust's makefiles currently defaulting to passing --cfg ndebug (as, I think, you were above), not the more abstract concept of a debug/release build.

Phrasing it in the latter terms: Rust currently defaults to a sort-of debug build; we could/should make it default to a release build. This seems mostly independent of detecting integer overflow in debug builds, to me?

I'm not talking about Rust's build system. I'm talking about rustc. I don't think it's acceptable to have this performance hit by default with a flag for opting out of it. I'm strongly against doing anything like this unless it's clear that the performance hit is opt-in.

I'm not doing that, I personally haven't said anything about performance or otherwise, just correcting/clarifying/commenting on some subpoints made earlier in this discussion, trying to keep everything as true (given what I personally understand) as possible. Whether this leads us to doing overflow detection or not is another matter.

You claimed that the defaults used by rustc are not relevant... I think it's very relevant if there's going to be a 20-100% performance hit unless people remember to pass an extra flag.

Are you saying that you would be OK with the RFC if it was adjusted to explicitly require using the debug_assert! macro, in an opt-in manner? (NB. it currently doesn't require using debug_assert! at all, it doesn't even require unwinding.)

I'm saying that I would be fine with it if it was made clear that this isn't going to be enabled when you use rustc foo.rs or rustc -O foo.rs and it's shown that it catches enough bugs to make up for the extra friction. If it hasn't caught a significant number of bugs in rustc, that's an indication that the added friction didn't come with any benefits.


Member

huonw commented Jan 9, 2015

I'm not talking about Rust's build system. I'm talking about rustc. I don't think it's acceptable to have this performance hit by default with a flag for opting out of it. I'm strongly against doing anything like this unless it's clear that the performance hit is opt-in.

You said above:

Even Rust's own nightly releases aren't built with ndebug.

I was refuting the relevance of this.

You claimed that the defaults used by rustc are not relevant... I think it's very relevant if there's going to be a 20-100% performance hit unless people remember to pass an extra flag.

I specifically claimed that:

The current build system defaults shouldn't influence language design

We should evaluate this feature on its own merits, not on what we happen to do in some Makefiles right this minute. Once the feature is accepted, we then choose the defaults for rustc appropriately. (I agree that the current choice is probably not what we want---with or without integer overflow detection---but fortunately it's easy to change.)


thestinger commented Jan 9, 2015

I was refuting the relevance of this.

I don't see how. It was an example of why the rustc default is what people will use. You're misrepresenting it by taking it out of context and pretending it was an argument about the Rust build system.

We should evaluate this feature on its own merits, not on what we happen to do in some Makefiles right this minute.

I was never talking about build system defaults, so I don't know why you keep coming back to this.

Once the feature is accepted, we then chose the defaults for rustc appropriately. (I agree that the current choice is probably not what we want---with or without integer overflow detection---but fortunately it's easy to change.)

The merits of the feature certainly depend on the interaction with the rest of the language and tooling. There is no indication that the compiler's default for ndebug is going to be changed so you can't hand wave the problem away. It's clear misdirection.

If you don't want to talk about the issue of a significant out-of-the-box performance hit then the RFC should make it clear that the feature is going to be opt-in with the reference implementation. Otherwise, the fact that it's significantly hurting performance by default is very relevant.


Contributor

kornelski commented Feb 4, 2015

@alexchandel I don't share your position, even in the case of u8. I work with graphics algorithms, where overflow is sometimes desirable (e.g. PNG decoding requires it), but most often it creates ugly errors (e.g. white areas of the image get black holes).

In video decoding algorithms it's very important to handle overflows in a correct way, but it's also very hard to do due to complexity and performance requirements of the code.

Checked overflow would help me catch these bugs earlier (and perhaps change to saturated add/multiply where it happened) and would make it sooo much easier to find the location that overflows.
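The graphics scenario above maps directly onto the per-operation methods that shipped alongside this RFC: wrapping produces exactly the "black holes in white areas" artifact, while saturating clamps at the channel maximum. A sketch assuming 8-bit pixel channels:

```rust
fn main() {
    // Brightening a nearly-white 8-bit pixel channel:
    let px: u8 = 240;
    assert_eq!(px.wrapping_add(40), 24);    // wraps past 255: white turns near-black
    assert_eq!(px.saturating_add(40), 255); // clamps at 255: stays white
    assert_eq!(px.checked_add(40), None);   // or detect the overflow explicitly
}
```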

dgrunwald commented Feb 4, 2015

It makes the semantics of + less clear. One would have to search the enclosing contexts to identify whether overflow checks have been explicitly disabled for that region of code. This is a non-local change that could occur at any point in the module hierarchy. This makes reading code harder.

This is an interesting argument. In C#, unchecked/checked is only available as statement or expression. This reduces the "action at a distance" to the current function, so it's easy to see the context when looking at the arithmetic operator.

aturon referenced this pull request Feb 6, 2015

Closed

Tracking issue for Integer Overflow (RFC 560) #22020

11 of 15 tasks complete

aturon merged commit 9fb9ad6 into rust-lang:master Feb 6, 2015

aturon commented Feb 6, 2015

This RFC is a continuation of a long-running discussion about overflow:

The topic has been covered in considerable depth in those earlier threads as well as on this RFC comment thread itself. While the discussion has been contentious at points, a fairly clear picture of the tradeoffs in each direction has emerged.

There were concerns about the performance hit even if checking is restricted to debugging builds. Some details about the hit and possible mitigations are in this comment. Other mitigation strategies include scoped disabling of checking, as was proposed in the initial RFC.

There were concerns about the clarity of the RFC text. This was partly intentional as the precise details of checking and panicking semantics will likely need some implementation work to fully sort out. But @mkaufmann's comment also helped clarify the text.

There were questions about whether over/underflow detection as proposed would actually catch many bugs. The clearest response is probably @nikomatsakis's comment, which discusses some published research on the topic as well as the need of using fuzzing and other tools in conjunction with the checking proposed here.

A key point that emerged is that, at heart, this RFC is about distinguishing intentional and unintentional over/underflow. There are many ways to make the distinction, but the most promising is by differentiating types, since mixtures of desired and undesired over/underflow for the same value are expected to be rare.

In the end, the core team has decided to move forward with this RFC. If we ever hope to distinguish between intentional wrapping and unintentional wrapping, this is the time. The specifics of interaction with optimization, defaults and opt-outs can be tweaked over time.

Tracking issue
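The type-based distinction described above landed in the standard library as the `Wrapping` newtype in `std::num`, alongside per-operation methods on the plain integer types — a sketch:

```rust
use std::num::Wrapping;

fn main() {
    // Intentional wraparound, expressed in the type itself:
    let hash = Wrapping(250u8) + Wrapping(10u8);
    assert_eq!(hash.0, 4); // 260 mod 256

    // Unintentional overflow stays detectable on plain integers:
    assert_eq!(250u8.checked_add(10), None);
}
```

Because the wrapping intent lives in the type, mixing desired and undesired wraparound on the same value becomes a visible conversion rather than an ambiguous `+`.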

bors added a commit to rust-lang/rust that referenced this pull request Mar 1, 2015

Auto merge of #22532 - pnkfelix:arith-overflow, r=pnkfelix
Implements most of rust-lang/rfcs#560. Errors encountered from the checks during building were fixed.

The checks for division, remainder and bit-shifting have not been implemented yet.

cc @Aatch ; cc @nikomatsakis

Manishearth added a commit to Manishearth/rust that referenced this pull request Mar 2, 2015

Rollup merge of #22532 - pnkfelix:arith-overflow, r=pnkfelix
 Rebase and follow-through on work done by @cmr and @Aatch.

Implements most of rust-lang/rfcs#560. Errors encountered from the checks during building were fixed.

The checks for division, remainder and bit-shifting have not been implemented yet.

See also PR #20795

cc @Aatch ; cc @nikomatsakis

bors added a commit to rust-lang/rust that referenced this pull request Mar 2, 2015

Auto merge of #22532 - pnkfelix:arith-overflow, r=pnkfelix
Rebase and follow-through on work done by @cmr and @Aatch.

Implements most of rust-lang/rfcs#560. Errors encountered from the checks during building were fixed.

The checks for division, remainder and bit-shifting have not been implemented yet.

See also PR #20795

cc @Aatch ; cc @nikomatsakis

bors added a commit to rust-lang/rust that referenced this pull request Mar 3, 2015

Auto merge of #22532 - pnkfelix:arith-overflow, r=pnkfelix,eddyb
Rebase and follow-through on work done by @cmr and @Aatch.

Implements most of rust-lang/rfcs#560. Errors encountered from the checks during building were fixed.

The checks for division, remainder and bit-shifting have not been implemented yet.

See also PR #20795

cc @Aatch ; cc @nikomatsakis
pnkfelix commented Mar 4, 2015

I am accumulating a list of actual bugs uncovered via the arithmetic overflow checks here:

I welcome others to tell me about any bugs they found via these checks. (Note that I am only interested in actual bugs; not in benign overflows; see the document for more discussion.)

Update: The above document was originally in an etherpad (you can find out which one via its commit history). But now it is in a github repo, so that others can submit pull requests with new stories of bugs arithmetic-overflow found in their code.

rocallahan commented Mar 4, 2015

I wouldn't expect you to find a lot of integer overflow bugs until you do some intensive fuzz testing.

nikomatsakis commented Mar 5, 2015

@rocallahan agreed (as we said earlier), but it's still good for historical record to know what bugs were encountered even without fuzz testing (and there certainly have been some).

pnkfelix commented Mar 12, 2015

The RFC has this text:

When truncating, the casting operation as can overflow if the truncated bits contain non-zero values.

I have written an internals post regarding the fine details of what this sentence is saying (or should instead be saying); the post is here:

http://internals.rust-lang.org/t/on-casts-and-checked-overflow/1710

If you had an interest in the discussion here or want to provide input on the future semantics of <expr> as <int-type>, you may want to go read it and write feedback.

(I note in particular both @mahkoh and @quantheory were discussing the details of cast-semantics in the discussion above, and therefore may want to chime in on the internals thread.)
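The sentence under discussion concerns truncating casts like the one below. `as` was ultimately specified to truncate silently; lossiness is detectable through `TryFrom`, which postdates this thread — a sketch:

```rust
use std::convert::TryFrom;

fn main() {
    let big: u16 = 0x1234;
    // `as` keeps the low 8 bits; the discarded 0x12 is the
    // "truncated bits contain non-zero values" case from the RFC text.
    assert_eq!(big as u8, 0x34);

    // try_from reports the lossy conversion instead of truncating:
    assert!(u8::try_from(big).is_err());
    assert_eq!(u8::try_from(0x0034u16), Ok(0x34u8));
}
```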

pnkfelix commented Apr 2, 2015

@glaebhoerl wrote:

As per this discussion with @petrochenkov, I think we should remove unary negation for unsigned types as part of this. (cc also rust-lang/rust#5477)

As a very very last minute change (that should still manage to land in the beta release, if all goes well), based on discussion here: "A tale of two’s complement" and here: "Forbid -(unsigned integer)", the team decided in the most recent weekly meeting to feature-gate the unary-negation operator on unsigned integer values.

This is largely based on following the philosophy presented here on RFC #560: the arithmetic operators should be used in a manner that does not cause overflow; if one wants bitwise operations, then one can e.g. rewrite -expr as !expr + 1, or rather the most general case, as (!expr).wrapping_add(1).

The PR that puts in the feature-gates is: rust-lang/rust#23945
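The rewrite mentioned above can be checked directly: `(!x).wrapping_add(1)` is the two's-complement negation of an unsigned value, equivalent to subtracting from zero with wrapping — a minimal sketch:

```rust
fn main() {
    let x: u8 = 5;
    // With unary `-` feature-gated on unsigned types, wrapping negation
    // is spelled out as bit-complement plus one (two's complement):
    let neg = (!x).wrapping_add(1);
    assert_eq!(neg, 251); // 256 - 5
    assert_eq!(neg, 0u8.wrapping_sub(x));
}
```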
