
proposal: spec: asymmetry between const and var conversions #6923

robpike opened this issue Dec 10, 2013 · 7 comments



commented Dec 10, 2013

This is not a request for a language change; I am just documenting a weakness of the
current const conversion rules: it confuses people that one can convert -1 to a uint,
but only if the -1 is stored in a variable. That is,

var s = uint(-1)

is illegal: constant -1 overflows uint

That is clear, but it's also clear what I mean when I write this and it's a shame that I
can't express what I mean as a constant, especially since

var m = -1
var s = uint(m)

works. There is a clumsy workaround for this case, involving magic bit operations, but
the problem can turn up in other less avoidable ways:

const N int = 1234
const x int = N*1.5

fails yet

const N = 1234
const x int = N*1.5

succeeds. (Note the missing "int" in the declaration of N.) This can be
rewritten as

const x int = N*3/2
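For concreteness, the difference between the typed and untyped declarations of N can be checked directly (a minimal sketch; the typed variant is left as a comment because it does not compile):

```go
package main

import "fmt"

const N = 1234 // untyped integer constant

// With an untyped N, N*1.5 is an exact untyped constant (1851),
// which is representable as an int, so this compiles:
const x int = N * 1.5

// The integer-only rewrite gives the same value:
const y int = N * 3 / 2

// Had N been declared "const N int = 1234", N*1.5 would mix an
// int operand with an untyped float and fail to compile.

func main() {
	fmt.Println(x, y)
}
```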

but if the floating point constant is itself named (as with the -1 in the uint example),
it becomes impossible to express the constant value in Go even though its value seems clear.

Again, not asking for a change, just pointing out a clumsy result of the existing rules.


commented Mar 4, 2014

Comment 1:

Labels changed: added release-none.



commented Feb 12, 2015

We may be able to address this in a fully backward-compatible way:

  1. We absolutely want the compiler to complain when we write something like

const x uint = -1
var x uint = -1

This doesn't work because -1 cannot be (implicitly) converted to a uint.

  2. But we could make a distinction between implicit constant conversions (such as the ones above) and explicit conversions of the form T(x) as in uint(-1). If we had this distinction, we could still disallow currently invalid constant conversions that are implicit, but we could permit explicit constant conversions that are currently not permitted (but would be if the values were variables).

Since such code is not valid at the moment, no existing code should be affected. Code like this

const x uint64 = -1

would still not be permitted. But using an explicit conversion one could write

const x = uint(-1)



commented Feb 12, 2015

Right now T(c) where T is a type and c is a constant means to treat c as having type T rather than one of the default types. It gives an error if c cannot be represented in T, except that for float and complex constants we quietly round to T as long as the value is not too large (I'm not sure that last bit is in the spec).

I think you are suggesting that T(c) is always permitted, but that implies that we do a type conversion, and a type conversion only makes sense if we know the type we are starting from. What type would that be? In particular, if the int type is 32 bits, what does uint64(-0x100000000) mean? That value cannot be represented in a 32-bit int, and it cannot be represented as a uint64. So what value do we start from when converting to uint64?

My point is of course not that we cannot answer that question, but that this is not an area where it is trivial to make everyone happy.
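One way to see the ambiguity: with variables, the starting type is explicit, and the result follows from it. A sketch using int64 as the (arbitrarily chosen) starting type:

```go
package main

import "fmt"

func main() {
	// Starting from an explicit int64, the conversion is well defined:
	// the bit pattern is reinterpreted under two's complement.
	var v int64 = -0x100000000
	fmt.Printf("%#x\n", uint64(v))

	// But -0x100000000 does not fit in a 32-bit int, so on a platform
	// with a 32-bit int there is no obvious "starting value" for the
	// constant expression uint64(-0x100000000).
}
```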



commented Feb 12, 2015

@ianlancetaylor That's a good point. @minux's suggestion could work (as in uint(int(-1)), or, because -1 would "default" to int, uint(-1) - but defaulting to int cuts precision where we may not want it).

But I think it's not that bad, because we can easily give "meaning" to an integer constant by defining its concrete representation as two's complement, like we do for variables - without defining that representation we couldn't explain uint(x) for an int variable x either.

For a start, let's just consider typed and untyped integer constants x: in either case, they would be considered as represented in "infinite precision" two's complement. Then conversions of the form T(x), where T is another integer type, would simply apply any truncation needed and assign a type. E.g.,

int16(0x12345678) = 0x5678 of type int16
byte(-1) = 0xff of type byte
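These truncations match what variable conversions already do, which can be checked directly (a sketch routing the same values through variables):

```go
package main

import "fmt"

func main() {
	// Truncation to the low 16 bits:
	var a int32 = 0x12345678
	fmt.Printf("%#x\n", int16(a)) // low 16 bits: 0x5678

	// Two's-complement -1 truncated to 8 bits:
	var b int = -1
	fmt.Printf("%#x\n", byte(b)) // low 8 bits: 0xff
}
```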


For floating point numbers it's similar. An untyped floating point constant would be arbitrarily precise; converting to a float32 or float64 would cut precision to 24 or 53 bits respectively, and possibly underflow (the value might become +/-0) or overflow (the value might become +/-Inf) - some of this is currently unspecified for variables but could be tied down, incl. the rounding mode.

Along the same lines I think one could make float->int and int->float conversions work.

But we may not need to go as far. The concrete issue is conversions between integer types. We could be pragmatic and simply state that integer constants are considered represented in infinite precision two's complement, and explicit type conversions do the "obvious" truncation/sign extension and type assignment.

@rsc rsc added this to the Unplanned milestone Apr 10, 2015

@rsc rsc removed release-none labels Apr 10, 2015

@rsc rsc changed the title spec: asymmetry between const and var conversions proposal: spec: asymmetry between const and var conversions Jun 20, 2017

@rsc rsc added the Go2 label Jun 20, 2017



commented Dec 8, 2017

This recently bit me. Took me a minute to nail down the exact issue. I'm fairly new to Go, but couldn't we add an optional ok param to these type conversions? Similar to interface casting/conversions:

foo := -1
bar, ok := uint8(foo)
if !ok {
  panic(fmt.Errorf("%v could not be converted to a uint8", foo))
}

The optionality of the ok return value would keep it backwards compatible while also providing a way to check for the problem.

Letting these conversions silently over/underflow via truncation is really confusing.
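No two-value form of a conversion exists in the language today, but a checked conversion can be written as a small helper (ToUint8 is a hypothetical name, not a standard-library function):

```go
package main

import (
	"fmt"
	"math"
)

// ToUint8 returns v converted to uint8 and reports whether v
// was representable without truncation.
func ToUint8(v int) (uint8, bool) {
	if v < 0 || v > math.MaxUint8 {
		return 0, false
	}
	return uint8(v), true
}

func main() {
	if _, ok := ToUint8(-1); !ok {
		fmt.Println("-1 does not fit in a uint8")
	}
	b, _ := ToUint8(200)
	fmt.Println(b)
}
```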



commented Aug 5, 2019

I was about to file an issue on this, specifically over:

uint64(^0) -> not valid
^uint64(0) -> valid

My proposed change (spec-wise, I don't know anything about the compiler) is basically:

Current spec wording:

The mask used by the unary bitwise complement operator ^ matches the rule for non-constants: the mask is all 1s for unsigned constants and -1 for signed and untyped constants.

Proposed wording:

The mask used by the unary bitwise complement operator ^ matches the rule for non-constants: the mask is all 1s for constants interpreted as unsigned values and -1 for constants interpreted as signed values.

In short, if you use ^0 in a context where the compiler expects an unsigned constant, it should do the same thing it would have done if you'd specified ^uintN(0).

This does get more complicated in cases where the value is trickier. We know what we mean by uint(-1). It's less obvious what, if anything, would be meant by uint64(^-4294967296).

Mostly I just want to be able to write ^0 without having to think about the specific uint type I want it to be.
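The two spellings behave as described (a sketch; the invalid form is left commented out):

```go
package main

import "fmt"

// Valid: the complement is taken after 0 already has type uint64,
// so the mask is all 1s and the result is representable.
const allOnes = ^uint64(0)

// Invalid: ^0 is the untyped constant -1, and uint64(-1) overflows:
// const bad = uint64(^0)

func main() {
	fmt.Printf("%#x\n", allOnes)
}
```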
