Duplicate keys from JSON encoder #81
Yep, this is definitely something we should document. All the unmarshalers that I've been able to get my hands on do exactly what you've described and keep the last value.
There's a good article about this on Stack Overflow. The tl;dr is: the JSON spec says object names "SHOULD be unique", and per RFC 2119 "SHOULD" means strongly recommended but not a "MUST". However, implementations may still throw errors for duplicate fields. We should definitely document that the user is responsible for avoiding duplicate fields.

Throwing out an idea regarding passing loggers around: currently a logger always has a reference to the top-level fields. What if you passed a logger that only had a reference to a nested object?

```go
logger := New(...)
_ = newConnection(logger.Nested("conn"))
_ = newRequest(logger.Nested("request"))
```

The nested logger would not be able to add anything to the top-level fields, so you could pass the logger to other components (internal or external) without having to worry about clashes.
Added a more explicit comment about this in #120, but leaving this issue open while we consider @prashantv's suggestion.
Created a separate issue for the nested logger: #132. Closing this issue since we don't have plans to enforce unique field names in the logger. We can make it easier using nested logging, but the user is still responsible for ensuring there are no field clashes.
Thanks. The documentation helps, and nested loggers sound like a good compromise.
For anyone who was confused by this like I was, remember that the parent logger is unaffected by the child logger created by `.With()`. So you can create a child logger, write a log (e.g. `.Info()`, `.Error()`), and then start over with a new child logger from the parent to prevent duplicates. Example below:
I noticed that if I do something like this:
I end up getting a JSON record like this:
Now, the above example probably doesn't make a lot of sense, but it is more realistic if you consider an application that passes loggers around, with various layers adding context with `zap.Logger.With`.
I can't find anything that says duplicate fields are against the spec, and it doesn't seem to break `json.Unmarshal` or Python's `json.loads`, which both end up taking the last `"foo":"baz"`, but it still might be worth handling (or even just documenting).