Add initial benchmark suite #390

Merged
merged 5 commits into open-telemetry:master on Oct 19, 2019

Conversation

@mayurkale22 (Member) commented Oct 2, 2019

Which problem is this PR solving?

  • Resolves Add baseline benchmark suite (#261)

  • This is my initial take on a benchmarking suite. I'd love to get your feedback to take this in the right direction.

  • Currently the benchmarks run some very basic operations against the NoopTracer, BasicTracer, and NodeTracer implementations, exercising the startSpan method with and without a parent (a sketch of this setup is included below).

[Screenshot: initial benchmark results, 2019-10-02]
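A minimal sketch of what such a startSpan micro-benchmark could look like with benchmark.js is shown below. It is not the exact file added in this PR; the package names, class names, and the `parent` span option reflect the 2019-era API and are assumptions that may need adjusting for the installed versions.

```js
// Sketch of a startSpan micro-benchmark (illustrative, not the PR's exact file).
// Import paths and constructors assume the 2019-era package layout.
'use strict';

const Benchmark = require('benchmark');
const { NoopTracer } = require('@opentelemetry/core');      // assumption: NoopTracer location
const { BasicTracer } = require('@opentelemetry/tracing');  // assumption: package/class name
const { NodeTracer } = require('@opentelemetry/node');      // assumption: package/class name

const tracers = {
  NoopTracer: new NoopTracer(),
  BasicTracer: new BasicTracer(),
  NodeTracer: new NodeTracer(),
};

const suite = new Benchmark.Suite('startSpan');

for (const [name, tracer] of Object.entries(tracers)) {
  // Root span: no parent.
  suite.add(`${name}#startSpan`, () => {
    const span = tracer.startSpan('op');
    span.end();
  });

  // Child span: the early API accepted a `parent` span option (assumption).
  const parent = tracer.startSpan('parent-op');
  suite.add(`${name}#startSpan with parent`, () => {
    const span = tracer.startSpan('op', { parent });
    span.end();
  });
}

suite
  .on('cycle', event => console.log(String(event.target))) // e.g. "BasicTracer#startSpan x N ops/sec"
  .run();
```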

Update 1: Added more benchmarks (Tracers and Propagators).
[Screenshot: updated benchmark results, 2019-10-02]

Key observation: HttpTraceContext is faster than B3Format for the inject operation, but the reverse is true for extract.
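A rough sketch of how the inject/extract comparison could be measured is shown below. The propagator class names match the PR description, but the import path, the hand-built span context fields, and the inject/extract call signatures are assumptions; the exact signatures varied across early releases (some also took a format argument).

```js
// Sketch of a propagator micro-benchmark comparing HttpTraceContext and B3Format.
// Import path, span-context field names, and method signatures are assumptions
// for the 2019-era API.
'use strict';

const Benchmark = require('benchmark');
const { HttpTraceContext, B3Format } = require('@opentelemetry/core'); // assumption

const spanContext = {
  traceId: 'd4cda95b652f4a1592b449d5929fda1b',
  spanId: '6e0c63257de34c92',
  traceFlags: 1, // assumption: field name differed in some early versions
};

// Pre-populated carriers so the extract benchmarks have realistic headers to parse.
const traceContextCarrier = {
  traceparent: '00-d4cda95b652f4a1592b449d5929fda1b-6e0c63257de34c92-01',
};
const b3Carrier = {
  'x-b3-traceid': 'd4cda95b652f4a1592b449d5929fda1b',
  'x-b3-spanid': '6e0c63257de34c92',
  'x-b3-sampled': '1',
};

const httpTraceContext = new HttpTraceContext();
const b3Format = new B3Format();

new Benchmark.Suite('propagators')
  .add('HttpTraceContext#inject', () => httpTraceContext.inject(spanContext, {}))
  .add('B3Format#inject', () => b3Format.inject(spanContext, {}))
  .add('HttpTraceContext#extract', () => httpTraceContext.extract(traceContextCarrier))
  .add('B3Format#extract', () => b3Format.extract(b3Carrier))
  .on('cycle', event => console.log(String(event.target)))
  .run();
```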

  • Any thoughts?

@OlivierAlbertini (Member) left a comment

This is great! 💯 Also, could we compare to OpenCensus?

benchmark/benchmark.js (outdated)
@@ -0,0 +1,25 @@
'use strict';
A contributor left a comment:

Would it be possible to use TypeScript for this and execute it with ts-node? Right now it's pretty simple but I expect that over time the benchmarks themselves may become more complex and could benefit from type checking, linting, etc.

Not sure how best to do that. Could we possibly put the benchmarks in the packages they test, e.g. with a "/benchmark" subfolder like we have a "/test" subfolder? That also organizes benchmarks with each package, in case some only apply to a particular exporter, etc.

A member replied:
Should we do it for the examples as well?

A contributor replied:
Interesting question... I actually think that giving users the ability to get to examples quickly is valuable, so keeping them in a top-level GitHub folder probably helps with that.

In contrast, a typical new user might not care about the details of benchmarks, though maybe if we advertised performance with some high-level graphs in the README that could be cool. There I'd mainly be interested in instrumented vs. un-instrumented app performance.

I can imagine us eventually wanting top-level "/tests" and "/benchmarks" folders as well that could test or benchmark a whole application using lots of different packages together, but for micro-benchmarks I wonder if putting them with the package makes sense.
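As a rough illustration of that idea, an instrumented vs. un-instrumented comparison could wrap the same workload in a span and benchmark both variants. The `doWork` function below is a placeholder, the tracer import is the same assumption as in the earlier sketches, and a realistic comparison would measure a full application under load rather than an in-process loop.

```js
// Sketch: compare the same workload with and without tracing around it.
// `doWork` is a placeholder workload; package/class names are assumptions.
'use strict';

const Benchmark = require('benchmark');
const { BasicTracer } = require('@opentelemetry/tracing'); // assumption: 2019-era package name

const tracer = new BasicTracer();

function doWork() {
  // Placeholder standing in for real application logic.
  let total = 0;
  for (let i = 0; i < 1000; i++) total += i * i;
  return total;
}

new Benchmark.Suite('instrumented vs un-instrumented')
  .add('un-instrumented', () => {
    doWork();
  })
  .add('instrumented', () => {
    const span = tracer.startSpan('doWork');
    doWork();
    span.end();
  })
  .on('cycle', event => console.log(String(event.target)))
  .run();
```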

A member replied:
+1 to instrumented vs. non-instrumented. I think either benchmarking each plugin, or just providing a package that lets you easily DIY as you've suggested, would be awesome.

@danielkhan (Contributor) left a comment

Wow - very cool!

@codecov-io commented Oct 3, 2019

Codecov Report

Merging #390 into master will increase coverage by 0.02%.
The diff coverage is n/a.

@@            Coverage Diff             @@
##           master     #390      +/-   ##
==========================================
+ Coverage   95.39%   95.41%   +0.02%     
==========================================
  Files         124      124              
  Lines        6142     6171      +29     
  Branches      510      509       -1     
==========================================
+ Hits         5859     5888      +29     
  Misses        283      283
Impacted Files                                       Coverage Δ
packages/opentelemetry-metrics/test/Meter.test.ts    100% <0%> (ø) ⬆️

@markwolff (Member) left a comment

LGTM. I assume at some point we will want to benchmark the gRPC and other plugins as well?


@bg451 (Member) left a comment

awesome!

@danielkhan merged commit 9855636 into open-telemetry:master on Oct 19, 2019
lukaswelinder pushed a commit to agile-pm/opentelemetry-js that referenced this pull request Jul 24, 2020
@martinkuba martinkuba mentioned this pull request Aug 31, 2023
Successfully merging this pull request may close these issues.

Add baseline benchmark suite
7 participants