make i0 a compile error #1593

andrewrk opened this Issue Sep 27, 2018 · 7 comments



andrewrk commented Sep 27, 2018

There was a rejected proposal to remove u0: #1530

However, I believe it does make sense to remove i0. While the range of unsigned integers is 0 to (pow(2, n) − 1), with n = 0, this gives the range [0, 0]. OK. But the range of signed integers is −(pow(2,n−1)) to (pow(2,n−1) − 1), with n = 0, this makes us do pow(2, -1) and that doesn't make sense.
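
The two range formulas can be compared directly (a quick Python sketch standing in for Zig, with `**` for pow):

```python
# Illustrative sketch of the range formulas in the proposal.
def unsigned_range(n):
    # [0, 2**n - 1] -- well-defined for every n >= 0
    return (0, 2**n - 1)

def signed_range(n):
    # [-(2**(n-1)), 2**(n-1) - 1] -- requires 2**(n-1),
    # which is fractional when n == 0
    return (-(2 ** (n - 1)), 2 ** (n - 1) - 1)

print(unsigned_range(0))  # (0, 0): u0 has the single value 0
print(signed_range(1))    # (-1, 0): the i1 values
print(signed_range(0))    # (-0.5, -0.5): not an integer range
```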

i0 would still be an identifier that maps to a signed 0-bit integer. The compile error would happen if you referenced i0 directly or if you used @IntType to construct a signed 0 bit integer.

@andrewrk andrewrk added the proposal label Sep 27, 2018

@andrewrk andrewrk added this to the 0.4.0 milestone Sep 27, 2018




winksaville commented Sep 27, 2018

pow(2, -1) is 0.5, which gives an i0 range of -0.5 to -0.5.
Converting -0.5 to an integer yields 0.

Checking the value behaviors for i0 and i1 seems reasonable to me:

$ cat i0-i1.zig 
const std = @import("std");
const assert = std.debug.assert;
const warn = std.debug.warn;

test "test.i0" {
    warn("     -pow(2, -1)={}\n", -std.math.pow(f32, 2, -1));
    warn("    pow(2, -1)-1={}\n", std.math.pow(f32, 2, -1)-1);
    warn("floatToInt(-0.5)={}\n", @floatToInt(i32, -0.5));
    var x0: i0 = 0;
    assert(x0 == 0);
}

test "test.i1" {
    warn("     -pow(2, 0)={}\n", -std.math.pow(f32, 2, 0));
    warn("    pow(2, 0)-1={}\n", std.math.pow(f32, 2, 0)-1);
    var v0: i1 = 0;
    assert(v0 == 0);
    var v1: i1 = -1;
    assert(v1 == -1);
    assert(v0 != v1);
}
wink@wink-desktop:~/prgs/ziglang/zig-explore/float-to-int (master)
$ zig test i0-i1.zig 
Test 1/2 test.i0...
     -pow(2, -1)=-5.0e-01
    pow(2, -1)-1=-5.0e-01
Test 2/2 test.i1...
     -pow(2, 0)=-1.0e+00
    pow(2, 0)-1=0.0e+00
All tests passed.



Hejsil commented Sep 27, 2018

From a memory perspective I don't think i0 makes much sense. ix implies that we have (x-1) value bits + 1 sign bit. i0 cannot have a sign bit because of its size.

EDIT: Well, mathematically it could: i0 would be -1 value bits + 1 sign bit, which indeed gives us 0 bits, but we had to involve negative bit sizes to make this possible.

@andrewrk andrewrk added the accepted label Sep 27, 2018




winksaville commented Sep 27, 2018

Obviously i0 has no bits, just as u0 does, and I think the "virtual" value assigned to the type is by convention. So I don't see the need for an explicit sign bit.

Also, the notion of a negative number of bits doesn't seem that different from having no bits.

I'm not trying to be flip; my suggestion is based on the goal that all types that are members of TypeId.Int should support all operations of every other member. If that goal is not possible, and no rational reason can be found for a type deviating from what the other members can do, then that type should be considered for removal.

For example, taking the address of a value or determining a struct field's offset is possible for some types but not for others. I would suggest that if a type has no address or offset, then it should be considered for removal.

On the other hand, I find it odd that u0 currently lacks support for addresses and offsets. From my perspective u0 always has an address or offset whenever a non-zero-bit type does, and I see no reason i0 is any different.




thejoshwolfe commented Sep 27, 2018

JavaScript BigInt allows the equivalent of i0 which always has the value 0. (bits is 0, bigint is any integer, mod is 0, step 4 takes the "otherwise" branch.)
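
For reference, the asIntN computation the comment walks through can be modeled roughly in Python (the step structure is paraphrased from the spec, so treat this as a sketch):

```python
# Rough Python model of JavaScript's BigInt.asIntN(bits, bigint).
def as_int_n(bits, bigint):
    mod = bigint % 2**bits       # mod is bigint modulo 2**bits
    if mod >= 2 ** (bits - 1):   # reinterpret high values as negative
        return mod - 2**bits
    return mod                   # the "otherwise" branch

# With bits == 0: mod = bigint % 1 == 0, and 0 >= 2**(-1) == 0.5 is
# false, so the "otherwise" branch always returns 0.
print(as_int_n(0, 12345))  # 0
print(as_int_n(1, 1))      # -1: i1's values are 0 and -1
```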

I believe the status quo i1 works reasonably, albeit counterintuitively, where the only values are 0 and -1. (Anecdotally, this is how Booleans work in Visual Basic; a Boolean is effectively an i1.)

If we look at the sequence of maximum and minimum values for signed integers as the number of bits approaches 0, we see:

 n | min | max
i5 | -16 |  15
i4 |  -8 |   7
i3 |  -4 |   3
i2 |  -2 |   1
i1 |  -1 |   0
i0 |   ? |   ?

The pattern here is:

  • min(n) = -2**(n - 1). min(0) = -2**(0 - 1) = -0.5
  • max(n) = 2**(n - 1) - 1. max(0) = 2**(0 - 1) - 1 = -0.5
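
The table and the pattern can be generated mechanically (illustrative Python, with `**` for pow):

```python
# Regenerate the min/max table from the two formulas above.
def min_i(n):
    return -(2 ** (n - 1))

def max_i(n):
    return 2 ** (n - 1) - 1

for n in range(5, -1, -1):
    print(f"i{n} | {min_i(n):>4} | {max_i(n):>4}")
# The i0 row prints -0.5 and -0.5: the non-integer limits
# being discussed.
```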

Now that I've worked through this on my own, I see that this is exactly what @winksaville reported above.

Nowhere else in the integer types do we see non-integer limits. For unsigned integers this is the formula:

  • min(n) = 0. min(0) = 0
  • max(n) = 2**n - 1. max(0) = 2**0 - 1 = 0

That clearly suggests that u0 can be equal to 0. But i0 being equal to -0.5 seems very strange.

I do not agree that it makes sense to convert -0.5 into the integer 0. Mathematically, there's no compelling reason to choose 0 instead of -1. Wikipedia provides 6 different ways to deal with rounding half integers to the nearest integer with various arguments for each way. Additionally, you can consider flooring to -1, truncating to 0, or any number of other methods for converting non-integers to integers. My point here is that it's not so obvious that the language should bake the assumption into its semantics.
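
The disagreement is easy to demonstrate (Python shown for illustration; each line is a different, equally defensible conversion of -0.5):

```python
import math
from decimal import Decimal, ROUND_HALF_UP

# Several standard ways to turn -0.5 into an integer disagree:
print(int(-0.5))         # 0   truncation toward zero
print(math.floor(-0.5))  # -1  flooring
print(math.ceil(-0.5))   # 0   ceiling
print(round(-0.5))       # 0   Python rounds halves to even
print(Decimal("-0.5").quantize(Decimal("1"), rounding=ROUND_HALF_UP))
                         # -1  round half away from zero
```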

i0 is too weird, and doesn't make sense.




winksaville commented Sep 27, 2018

I disagree that i0 is weird enough to remove at this stage. If there is no reasonable value for i0, then it should be removed; but I'd say 0 is a reasonable value. If we can agree on the "properties" of TypeId.Int and have all iX's and uX's adhere to these properties, that would be best. The consistency of allowing X to range from 0..N, where N is now 128, is preferable to exceptions for any particular value of X. IMHO.

@andrewrk andrewrk modified the milestones: 0.4.0, 0.5.0 Sep 28, 2018



wirelyre commented Oct 3, 2018

Perhaps some fresh eyes can help untangle this.

The only material difference between i𝑛 and u𝑛 seems to be comparison. You can rewrite "less than" for signed integers in terms of @bitCasted unsigned integers T:

if ((a <= @maxValue(T) / 2 and b <= @maxValue(T) / 2)
    or (a > @maxValue(T) / 2 and b > @maxValue(T) / 2))
    a < b
else
    a > @maxValue(T) / 2

In the case of u0, this reduces to a < b since the first condition is always true. (Actually, it reduces to false since the complete type is u0 = {0}, so the only relation is 0 < 0 == false.)
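
As a sanity check, the rewrite can be modeled in Python (T taken to be u8 here, so @maxValue(T) is 255, and the bitcast is modeled as reduction mod 256):

```python
# Signed "a < b" computed on unsigned bit patterns, following the
# rewrite above for T = u8.
MAX = 255
HALF = MAX // 2  # 127: unsigned values above this are negative i8s

def signed_lt(a, b):
    # both operands on the same side of the sign boundary:
    if (a <= HALF and b <= HALF) or (a > HALF and b > HALF):
        return a < b
    # different sides: a is smaller iff a is the negative one
    return a > HALF

def to_unsigned(x):
    # model of @bitCast(u8, x) for an i8 value x
    return x % 256

# exhaustively compare against Python's built-in signed <
for a in range(-128, 128):
    for b in range(-128, 128):
        assert signed_lt(to_unsigned(a), to_unsigned(b)) == (a < b)
print("signed < matches the unsigned rewrite for all i8 pairs")
```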

No need for negative bits or non-integer ranges. They arise from the assumption that the range of signed integers is evenly partitioned into negative and non-negative numbers. This is obviously false if there is only one integer value.

Under this logical scheme, u0 and i0 are treated as genuine integer types with a single value 0, rather than fancy aliases of the unit type.




scurest commented Oct 3, 2018

The minimum/maximum makes sense if you phrase the upper bound exclusively (Dijkstra argues this is the preferred way to phrase an interval): i𝑛 is the set { x in Z | -2^(n-1) <= x < 2^(n-1) }, so i0 is { x in Z | -1/2 <= x < 1/2 } = { 0 }.
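
A small Python check of the half-open-interval phrasing (the search bounds are arbitrary, just wide enough to cover the ranges tested):

```python
# Integers in the half-open interval [-2**(n-1), 2**(n-1)).
def values(n):
    lo, hi = -(2 ** (n - 1)), 2 ** (n - 1)
    return [x for x in range(-1000, 1000) if lo <= x < hi]

print(values(3))  # [-4, -3, -2, -1, 0, 1, 2, 3]
print(values(0))  # [0]: the only integer in [-0.5, 0.5)
```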

Also @IntType(s, n) is the image of @IntType(s, n+1) under the map of integers x ↦ trunc(x/2) which again suggests i0 should be {0}. (It also suggests u0 should be {0}, which it is.)
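
The truncation map can be checked the same way (illustrative Python; math.trunc rounds toward zero, matching trunc(x/2)):

```python
import math

# The map x -> trunc(x / 2) applied to all values of the (n+1)-bit type.
def trunc_half(x):
    return math.trunc(x / 2)

print(sorted({trunc_half(x) for x in [-1, 0]}))       # [0]: image of i1, so i0 = {0}
print(sorted({trunc_half(x) for x in [0, 1]}))        # [0]: image of u1, so u0 = {0}
print(sorted({trunc_half(x) for x in [-2, -1, 0, 1]}))  # [-1, 0]: image of i2 is i1
```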

IMO it also makes sense to have it just for symmetry with u0.
