
Add benchmark README and latest numbers #689

Merged — mayurkale22 merged 10 commits into open-telemetry:master from benchmarrk_readme on Jan 24, 2020

Conversation

mayurkale22
Member

Which problem is this PR solving?

  • This is the first iteration of README.md; please contribute or suggest missing items.

@@ -21,7 +21,8 @@ const setups = [

 for (const setup of setups) {
   console.log(`Beginning ${setup.name} Benchmark...`);
   const tracer = setup.registry.getTracer("benchmark");
+  const registry = setup.registry;
mayurkale22 (Member Author)

Line #59 below depends on this.
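For context, a minimal sketch of the benchmark loop around this change (the stub registry and tracer shapes below are hypothetical placeholders, not the real OpenTelemetry API): the registry is pulled into a local variable so that later lines in the benchmark file can reuse it.

```javascript
// Hypothetical stub: mimics just enough of a tracer registry for the sketch.
function makeStubRegistry() {
  return {
    getTracer(name) {
      return { startSpan: (op) => ({ op, end() {} }), name };
    },
  };
}

const setups = [
  { name: 'BasicTracerRegistry', registry: makeStubRegistry() },
  { name: 'NoopTracerRegistry', registry: makeStubRegistry() },
];

const visited = [];
for (const setup of setups) {
  console.log(`Beginning ${setup.name} Benchmark...`);
  const registry = setup.registry; // kept in a local so later lines can reuse it
  const tracer = registry.getTracer('benchmark');
  const span = tracer.startSpan('op');
  span.end();
  visited.push(setup.name);
}
```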

chore: update readme
@codecov-io

codecov-io commented Jan 13, 2020

Codecov Report

Merging #689 into master will decrease coverage by 1.58%.
The diff coverage is 100%.

@@            Coverage Diff             @@
##           master     #689      +/-   ##
==========================================
- Coverage    92.7%   91.12%   -1.58%     
==========================================
  Files         227      225       -2     
  Lines       10346    10541     +195     
  Branches      934      965      +31     
==========================================
+ Hits         9591     9606      +15     
- Misses        755      935     +180
Impacted Files Coverage Δ
...ckages/opentelemetry-exporter-jaeger/src/jaeger.ts 90.56% <100%> (+0.56%) ⬆️
packages/opentelemetry-core/test/utils/url.test.ts 50% <0%> (-50%) ⬇️
...pentelemetry-core/test/internal/validators.test.ts 50% <0%> (-50%) ⬇️
...elemetry-core/test/trace/spancontext-utils.test.ts 55.55% <0%> (-44.45%) ⬇️
...lemetry-core/test/trace/ProbabilitySampler.test.ts 56.52% <0%> (-43.48%) ⬇️
...s/opentelemetry-core/test/trace/NoopTracer.test.ts 60% <0%> (-40%) ⬇️
...s/opentelemetry-core/test/context/B3Format.test.ts 63.39% <0%> (-36.61%) ⬇️
...ges/opentelemetry-core/test/trace/NoopSpan.test.ts 63.63% <0%> (-36.37%) ⬇️
...s/opentelemetry-core/test/trace/tracestate.test.ts 65.06% <0%> (-34.94%) ⬇️
...ntelemetry-core/test/trace/NoRecordingSpan.test.ts 71.42% <0%> (-28.58%) ⬇️
... and 40 more

chore: update readme
@dyladan
Member

dyladan commented Jan 13, 2020

Can you add a noop tracer setup?

@dyladan
Member

dyladan commented Jan 13, 2020

The difference between NodeTracerRegistry and BasicTracerRegistry is only in setup (1 time per application) and the getTracer call (1 time per plugin). The benchmarks themselves don't do either of these things so there is no actual difference between them. In my opinion, the setups it makes sense to keep are the BasicTracerRegistry and the NoopTracerRegistry.
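The point above can be illustrated with a sketch (stub tracer; names hypothetical): registry construction and getTracer happen once, outside the timed region, so the measured work is identical no matter which registry produced the tracer.

```javascript
// Time only the per-span work; setup cost is deliberately excluded,
// which is exactly where the registries differ.
function timedRun(tracer, iterations) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) {
    const span = tracer.startSpan('op'); // only this work is measured
    span.end();
  }
  return Number(process.hrtime.bigint() - start); // elapsed nanoseconds
}

// One-time setup, not measured: whichever registry handed out this
// tracer has no effect on the loop above.
const stubTracer = { startSpan: (op) => ({ op, end() {} }) };
const elapsedNs = timedRun(stubTracer, 1000);
```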

@mayurkale22 mayurkale22 added the document Documentation-related label Jan 13, 2020
@obecny (Member) left a comment

So far, npm run bench fails for me. Does it work for you?

@OlivierAlbertini
Member

So far, npm run bench fails for me. Does it work for you?

Indeed, it fails even after running npm install at the project root:

➜ npm run bench 
npm info it worked if it ends with ok
npm info using npm@6.10.2
npm info using node@v10.9.0
npm info lifecycle opentelemetry-base@0.3.2~prebench: opentelemetry-base@0.3.2
npm info lifecycle opentelemetry-base@0.3.2~bench: opentelemetry-base@0.3.2

> opentelemetry-base@0.3.2 bench /Users/.../apps/.../opentelemetry-js
> node benchmark


internal/modules/cjs/loader.js:583
    throw err;
    ^

Error: Cannot find module '@opentelemetry/core'
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:581:15)
    at Function.Module._load (internal/modules/cjs/loader.js:507:25)
    at Module.require (internal/modules/cjs/loader.js:637:17)
    at require (internal/modules/cjs/helpers.js:20:18)
    at Object.<anonymous> (/Users/.../apps/.../opentelemetry-js/benchmark/tracer.js:4:23)
    at Module._compile (internal/modules/cjs/loader.js:689:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:700:10)
    at Module.load (internal/modules/cjs/loader.js:599:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:538:12)
    at Function.Module._load (internal/modules/cjs/loader.js:530:3)
child_process.js:640
    throw err;
    ^

Error: Command failed: node benchmark/tracer
    at checkExecSyncError (child_process.js:600:11)
    at execSync (child_process.js:637:13)
    at exec (/Users/.../apps/.../opentelemetry-js/benchmark/index.js:4:21)
    at Object.<anonymous> (/Users/.../apps/.../opentelemetry-js/benchmark/index.js:6:1)
    at Module._compile (internal/modules/cjs/loader.js:689:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:700:10)
    at Module.load (internal/modules/cjs/loader.js:599:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:538:12)
    at Function.Module._load (internal/modules/cjs/loader.js:530:3)
    at Function.Module.runMain (internal/modules/cjs/loader.js:742:12)
npm info lifecycle opentelemetry-base@0.3.2~bench: Failed to exec bench script

@dyladan
Member

dyladan commented Jan 17, 2020

I would suggest the following:

  • Have the benchmark load the packages directly by file path, so we avoid dealing with dependency management in the benchmarks.
  • Make the number of runs configurable per file, so the formatters, which are very fast, can run more times than the tracers.
  • Instead of distinguishing between web and node tracers (there is no longer any distinction in the actual tracer), distinguish between the noop tracer, the tracer without a span processor, and the tracer with each span processor.

Made mayurkale22#1 so you can see what I'm talking about.

@obecny
Member

obecny commented Jan 17, 2020

For the web, it would be nice to be able to run the benchmark in a few popular browsers, since a particular browser can sometimes be really slow. For that we could use a service like https://www.browserstack.com/, for example.

@OlivierAlbertini OlivierAlbertini added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Jan 21, 2020
Apply updates suggested in PR comment
@dyladan dyladan added the Merge:LGTM This PR is ready to be merged by a Maintainer (has enough valid approvals, successful build, etc.) label Jan 21, 2020
@dyladan
Member

dyladan commented Jan 22, 2020

Please put the latest numbers in the README now that you've merged my sub-PR.

@mayurkale22
Member Author

Please put the latest numbers in the README now that you've merged my sub-PR.

Done.

@mayurkale22
Member Author

So far, npm run bench fails for me. Does it work for you?

@dyladan made some changes to the PR, and npm run bench is working now. @obecny, can you please verify and approve the PR?

@mayurkale22 mayurkale22 merged commit 69947da into open-telemetry:master Jan 24, 2020
@mayurkale22 mayurkale22 deleted the benchmarrk_readme branch January 24, 2020 23:06
pichlermarc pushed a commit to dynatrace-oss-contrib/opentelemetry-js that referenced this pull request Dec 15, 2023
…etry#689)

* fix: add support for setConfig in ioredis instrumentation

* fix: use latest config in traceSendCommand

* fix: lint errors fixed

* fix: remove override setConfig

* refactor: ioredis instrumentation refactored
moved traceConnection to instrumentation.ts
refactored tests to use setConfig instead of new instrumentation