Added C++ benchmark. #1525
Conversation
Review to @xfxyjwf.
retest this please
while (state.KeepRunning()) {
  const std::string& payload = payloads_[i.Next()];
  total += payload.size();
  m->ParseFromString(payload);
In the ArenaParseFixture, a new message is created in every parsing loop; in this NoArenaParseFixture, however, you are reusing the same message. A fairer comparison would probably recreate the message in this parsing loop as well.
If we want to benchmark the case where a message is reused, I guess we can change the ArenaParseFixture to something like:
Arena arena;
Message* m = Arena::CreateMessage<T>(&arena);
while (state.KeepRunning()) {
  const std::string& payload = payloads_[i.Next()];
  total += payload.size();
  m->ParseFromString(payload);
  if (counter++ % kArenaThreshold == 0) {
    arena.Reset();
    m = Arena::CreateMessage<T>(&arena);
  }
}
To make this more fair, I split the "NoArena" case into two: one that creates a message from scratch (parse_new) and one that reuses an existing message (parse_reuse).
I'm not sure it makes sense to allocate multiple top-level messages in a single arena and reset it periodically. Does anybody use arenas this way?
If you can point me to some real-world uses of arena that work this way, I'll update the benchmark (or maybe add a new one for that pattern).
Force-pushed from 4564c7e to 2bd8724.
Results on my desktop:
LGTM. Please squash the commits before merging.
WrappingCounter(size_t limit) : value_(0), limit_(limit) {}

size_t Next() {
  size_t ret = value_;
(value + 1) % limit
I think what I currently have is much faster. "limit" isn't a compile-time constant, so `%` will turn into a real `idiv` instruction, which is very slow. Mine is a single, extremely predictable branch.
I consider this the wrong abstraction of the above one-liner. If you want to do this wrapping as an abstraction, then just abstract the whole payload:
const string& NextPayload() { ... }
I disagree. I think what I have is simpler. It doesn't need to know anything about the type or storage or lifetime of the things being iterated over. It is just a simple wrapping counter.
LGTM overall, minor comment.
Here are initial benchmark results: