proposal: spec: asymmetry between const and var conversions #6923
This is not a request for a language change; I am just documenting a weakness of the current const conversion rules: it confuses people that one can convert -1 to a uint, but only if the -1 is stored in a variable. That is,

```go
var s = uint(-1)
```

is illegal:

```
constant -1 overflows uint
```

That is clear, but it's also clear what I mean when I write this, and it's a shame that I can't express what I mean as a constant, especially since

```go
var m = -1
var s = uint(m)
```

works. There is a clumsy workaround for this case, involving magic bit operations, but the problem can turn up in other, less avoidable ways:

```go
const N int = 1234
const x int = N*1.5
```

fails, yet

```go
const N = 1234
const x int = N*1.5
```

succeeds. (Note the missing "int" in the declaration of N.) This can be rewritten as

```go
const x int = N*3/2
```

but if the floating-point constant is itself named (as with the -1 in the uint example), it becomes impossible to express the constant value in Go even though its value seems clear.

Again, not asking for a change, just pointing out a clumsy result of the existing rules.
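For reference, the bit-operation workaround alluded to here is presumably the usual `^uint(0)` idiom:

```go
// Legal today: ^ flips all bits of the typed constant uint(0),
// producing the same all-ones value that uint(-1) yields for a variable.
const allOnes = ^uint(0)

// The direct spelling remains an error:
// const s = uint(-1) // constant -1 overflows uint
```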
We may be able to address this in a fully backward-compatible way:
```go
const x uint = -1
```
This doesn't work because -1 cannot be (implicitly) converted to a uint.
Since such code is not valid at the moment, no existing code should be affected. Code like this
```go
const x uint64 = -1
```
would still not be permitted. But using an explicit conversion one could write
```go
const x = uint(-1)
```
Right now T(c), where T is a type and c is a constant, means to treat c as having type T rather than one of the default types. It gives an error if c cannot be represented in T, except that for float and complex constants we quietly round to T as long as the value is not too large (I'm not sure that last bit is in the spec).
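A minimal illustration of the current rules (the commented-out lines show gc's error messages):

```go
const a = uint8(255)         // OK: 255 is representable in uint8
// const b = uint8(256)      // error: constant 256 overflows uint8
const c = float32(1.0 / 3.0) // OK: quietly rounded to the nearest float32
// const d = float32(1e300)  // error: constant 1e+300 overflows float32
```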
I think you are suggesting that T(c) is always permitted, but that implies that we do a type conversion, and a type conversion only makes sense if we know the type we are starting from. What type would that be? In particular, if the int type is 32 bits, what does uint64(-0x100000000) mean? That value cannot be represented in a 32-bit int, and it cannot be represented as a uint64. So what value do we start from when converting to uint64?
My point is of course not that we cannot answer that question, but that this is not an area where it is trivial to make everyone happy.
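To make the question concrete, a small sketch: the constant form is rejected outright today, while the variable form supplies an explicit starting type that fixes the result:

```go
package main

import "fmt"

func main() {
	// Today the constant form is simply rejected, so no answer is needed:
	// const c = uint64(-0x100000000) // error: constant -4294967296 overflows uint64

	// With a variable, the starting type is explicit and determines the result:
	var v int64 = -0x100000000
	fmt.Printf("%#x\n", uint64(v)) // 0xffffffff00000000: two's-complement view of an int64
}
```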
@ianlancetaylor That's a good point. @minux's suggestion could work (as in uint(int(-1)); or, because -1 would "default" to int, simply uint(-1)). But defaulting to int cuts precision where we may not want it.
But I think it's not that bad, because we can easily give "meaning" to an integer constant by defining its concrete representation as two's complement, like we do for variables. Without defining that representation, we couldn't explain uint(x) for an int variable x either.
For a start, let's just consider typed and untyped integer constants x: in either case, they would be considered as represented in "infinite precision" two's complement. Then a conversion of the form T(x), where T is an integer type, would simply apply any truncation needed and assign the type. E.g.,
```go
int16(0x12345678) // == 0x5678, of type int16
```
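Today that truncation is only expressible by routing the value through a variable; under the sketch above, the constant form (hypothetical, commented out) would mean the same thing:

```go
package main

import "fmt"

func main() {
	v := 0x12345678
	fmt.Printf("%#x\n", int16(v)) // 0x5678: low 16 bits kept, as for any variable conversion

	// Proposed constant analogue (an error today):
	// const t = int16(0x12345678) // would be 0x5678 of type int16
}
```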
For floating-point numbers it's similar. An untyped floating-point constant would be arbitrarily precise; converting to a float32 or float64 would cut the precision to 24 or 53 bits respectively, and possibly underflow (the value might become ±0) or overflow (the value might become ±Inf). Some of this is currently unspecified for variables but could be tied down, including the rounding mode.
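Variable conversions already behave roughly this way, as the small demo below shows; note that the overflow result is implementation-dependent under the current spec:

```go
package main

import "fmt"

func main() {
	x := 1.0 / 3.0            // float64
	fmt.Println(float32(x))   // 0.33333334: rounded to a 24-bit significand
	big := 1e300
	fmt.Println(float32(big)) // +Inf in practice; overflow is implementation-dependent per spec
}
```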
Along the same lines, I think one could make float->int and int->float conversions work.
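For instance, float-to-int conversion already truncates toward zero for variables; only the constant form is an error today (a sketch):

```go
package main

import "fmt"

func main() {
	f := 2.9
	fmt.Println(int(f)) // 2: variable conversion truncates toward zero

	// const c = int(2.9) // error today: constant 2.9 truncated to integer
}
```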
But we may not need to go as far. The concrete issue is conversions between integer types. We could be pragmatic and simply state that integer constants are considered to be represented in infinite-precision two's complement, and that explicit type conversions do the "obvious" truncation/sign extension and type assignment.
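A sketch of what that pragmatic rule would permit; the three commented constants are hypothetical (all errors today), with the variable detour showing the intended values:

```go
package main

import "fmt"

func main() {
	// Hypothetical under the rule; all three are errors today:
	// const a = uint(-1)     // all bits set
	// const b = uint8(0x100) // 0x00
	// const c = int8(0xff)   // -1 (sign bit reinterpreted)

	// The variable detour that yields the intended values:
	m, n, p := -1, 0x100, 0xff
	fmt.Println(uint(m), uint8(n), int8(p)) // 18446744073709551615 0 -1 (with 64-bit uint)
}
```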
This recently bit me. It took me a minute to nail down the exact issue. I'm fairly new to Go, but couldn't we add an optional
The optionality of the
Letting these conversions silently over/underflow via truncation is really confusing.
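For anyone else hitting this, a small demo of the silent truncation being described:

```go
package main

import "fmt"

func main() {
	big := 70000
	fmt.Println(int16(big)) // 4464: the high bits vanish with no error or warning
	neg := -1
	fmt.Println(uint8(neg)) // 255: underflow wraps around silently
}
```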
I was about to file an issue on this, specifically over:
My proposed change (spec-wise, I don't know anything about the compiler) is basically:
In short, if you use
This does get more complicated in cases where the value is trickier. We know what we mean by
Mostly I just want to be able to write