Add benchmarks to compare text and protobuf parsing. #53
@brian-brazil @juliusv
I guess that means for high sample ingestion rates, we should aim for (optionally) supporting protobuf in the new Java client, too...
On the other hand: since parsing happens in parallel across many scrapes, the bottleneck for sample ingestion on spinning-disk machines is persisting the chunks to disk. (With more efficient chunk compression, that might change, though.) On SSD, the bottleneck (with protobuf ingestion) appeared to be the fingerprint calculation, which currently happens serially. Should anybody ever need more than 50k samples/sec of ingestion, we would first improve the fingerprint calculation.
Still, especially in situations where CPU is tighter (and the server is also serving expensive queries and such), a 4x speed-up in parsing seems well worth it...
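For reference, a rough sketch of what such a comparison can look like with Go's standard benchmark harness. This is not the PR's actual code: it assumes the `expfmt.TextParser` from `github.com/prometheus/common/expfmt` and `pbutil.ReadDelimited` from `github.com/matttproud/golang_protobuf_extensions/pbutil`, and the `textInput`/`protoInput` payloads are placeholders for the same scrape encoded in both formats.

```go
// Hypothetical benchmark sketch, not the code added in this PR.
package parsing_test

import (
	"bytes"
	"io"
	"testing"

	"github.com/matttproud/golang_protobuf_extensions/pbutil"
	dto "github.com/prometheus/client_model/go"
	"github.com/prometheus/common/expfmt"
)

// Placeholder payloads: the same scrape in the text exposition format
// and as length-delimited protobuf MetricFamily messages.
var textInput, protoInput []byte

func BenchmarkTextParsing(b *testing.B) {
	var parser expfmt.TextParser
	for i := 0; i < b.N; i++ {
		if _, err := parser.TextToMetricFamilies(bytes.NewReader(textInput)); err != nil {
			b.Fatal(err)
		}
	}
}

func BenchmarkProtobufParsing(b *testing.B) {
	for i := 0; i < b.N; i++ {
		r := bytes.NewReader(protoInput)
		for {
			mf := &dto.MetricFamily{}
			if _, err := pbutil.ReadDelimited(r, mf); err != nil {
				if err == io.EOF {
					break
				}
				b.Fatal(err)
			}
		}
	}
}
```

Run with `go test -bench=Parsing -benchmem` to compare ns/op and allocations per scrape between the two formats.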