encoding/json: unmarshalling null into non-nullable golang types #33835
Does this issue reproduce with the latest release?
As of 1.12, yes.
What did you do?
What did you expect to see?
A type mismatch error, similar to if I'd unmarshalled a string into an int, or any other type mismatch.
What did you see instead?
No error at all; unmarshalling 'succeeds', leaving the target value untouched.
I'm aware of #2540 , I'm specifically arguing for the reversal of that decision. I'm aware the documentation says "Because null is often used in JSON to mean “not present,” unmarshalling a JSON null into any other Go type has no effect on the value and produces no error.". I'm arguing against that behaviour.
The unmarshaller shouldn't have an opinion on what null means.
If I'm unmarshalling into a specific Go type, I've gone to the effort of statically defining the type that I expect. The unmarshaller should assist me, and reject any poorly typed input (which it does well for all types other than null).
Low, since it would change the run-time behavior of the program without a corresponding compile-time error.
Per https://go.googlesource.com/proposal/+/refs/heads/master/design/28221-go2-transitions.md#language-redefinitions: “In order for the Go ecosystem to survive a transition to Go 2, we must minimize these sorts of redefinitions.”
Edit: the code already lets custom UnmarshalJSON implementations handle null themselves.
I've been thinking about this, and I'm not sure that adding an option to Decoder is the right mechanism.
Imagine that a project which wants to take advantage of "strict null" interpretation (this proposal) embeds (in the JSON sense, not the Go struct sense) some third-party struct, such as one from a third-party SDK or a protobuf definition, which does not expect strict nulls.
The only solution I can think of, but maybe there are others, would be to support this mode by a struct tag:
This leaves the behavior of embedded types unaltered.
Just thinking out loud, you could wrap the third-party type in a local type with its own UnmarshalJSON.
Then you'd get the standard no-error behaviour.
Just trying to understand your example a bit better, if you're using strict nulls, and the
(EDIT: that's not how the current code works; the type would explicitly ignore null.)
If I were the project using strict nulls, I think I'd be OK overriding types to fix up 3rd-party types. (I'm already being anal about nulls by opting into strict nulls, after all.) How do you feel about the overriding approach compared to struct tags?
( And in the magical far flung future, everyone would migrate to strict nulls, and it would become the default :) I can only dream )
It doesn't seem that creative, but I appreciate the compliment nonetheless! This is the exact same thing you do if you want to add json serialisation to an existing type that doesn't have any, such as
You would have to clearly define which of the builtin types are nullable (in the sense that unmarshalling null into them is meaningful).
Which gets a bit murky, because maps and slices have a nil value. But both of those nil values behave like the non-nil-but-empty value, so Go itself doesn't strongly distinguish between the two and doesn't really help us decide. The existing
Whoops, my point 5 is wrong: I misread the source code. It is possible to write your own UnmarshalJSON and have it called for null.
Which means I could go to the effort of making my own set of null-rejecting types (jsonNotNullInt, jsonNotNullString, and so on).
But then I'm either polluting my code with json-specific types, or I have to somehow translate every jsonNotNullX type, and every struct that contains a jsonNotNullX type, back into the original types. Which is way more code than is worth it for strict nulls.
Being wrong about 5 also means that if I wanted to use strictNulls, include 3rd party types like
I can't see 3rd-party types ever being fixed to be strict. They can't change the current behaviour without breaking their existing users.
We've produced an ecosystem where all publicly exported types are explicitly json-nullable, and I can't imagine that's ever going to change. That's really demotivating.
@ostcar as per the original post, the current behavior is already documented:
Which composes with how arrays are decoded:
So any null JSON array elements result in zero value Go elements being appended to the slice.
I strongly agree that this behavior can't be changed in v1. I've already had multiple reasonable json "bug fixes" reverted in previous releases because they broke too much existing code, and this would almost certainly be reverted as well.
I think we should rephrase this issue to be about giving the developer more control over whether they want to be lenient and accept nulls anywhere, or be stricter and only allow them in certain places. That seems in line with previous json proposals like not allowing unknown fields, disallowing case-insensitive matching of object keys, and so on. It's certainly an issue we should keep in mind for future JSON API/design changes.