benchmark: add slog #1245
Conversation
Thanks, @ChannyClaus!
updated the README - the alignment changed in the raw Markdown, but the tables look fine when rendered (https://github.com/ChannyClaus/zap).
stdlib log returns early if the writer is io.Discard, without attempting to format the string. This yields an inaccurate result in the benchmark. Use ztest.Discarder to defeat the io.Discard detection.
Thanks!
No, this is fine; it's valid Markdown. What's curious is that 'standard library' reports 8 nanoseconds for the last benchmark. https://cs.opensource.google/go/go/+/refs/tags/go1.20.2:src/log/log.go;l=201-203 log.Printf will no-op early if the logger is built with io.Discard, which skews this benchmark. A workaround is to use Zap's ztest.Discarder, which log.Printf will not detect as being io.Discard. I've pushed a commit doing this to your branch; would you please re-run? Thanks!
Codecov Report
@@           Coverage Diff           @@
##           master    #1245   +/-   ##
=======================================
  Coverage   98.23%   98.23%
=======================================
  Files          49       49
  Lines        3231     3231
=======================================
  Hits         3174     3174
  Misses         49       49
  Partials        8        8
nice catch - updated the readme again! standard library benchmark looks slow as expected now 😆
README.md
Outdated
| :zap: zap (sugared) | 265 ns/op | -31% | 1 allocs/op |
| :zap: zap | 386 ns/op | +0% | 0 allocs/op |
This one looks wrong?
As in, sugared is somehow faster than non-sugared.
Also, looks like previously someone (manually?) sorted zerolog to be on top?
Huh, yeah that's weird. I haven't had a chance to dig into it yet.
Yeah, I think when this was last updated, the author aligned the table and sorted the entries by time, rather than "zap on top and then everything else sorted", which is what the script does.
hmm, i re-generated the readme a bunch of times and now i'm consistently getting the expected result (i.e. the sugared logger is slower by ~20-30%).
i suspect i unintentionally strained my cpu doing something else on my laptop while the sugared benchmark was running (sorry! noob in benchmarking).
previous run seems to have been inaccurate due to unintentional, simultaneous external load on the cpu.
Thank you for doing this work!
#1229
not sure how to generate the output in the README.md, though - maybe someone can help me out with that?