
feat(exporter-collector): implement concurrencyLimit option #1708

Merged
merged 3 commits into master from concurrent-export-limit on Dec 10, 2020

Conversation

@dobesv (Contributor) commented Dec 2, 2020

This adds a `concurrencyLimit` option to the collector exporters. If this is set and
the number of in-flight export operations equals the limit, additional export operations
will fail immediately.

This should be set in combination with the batch span processor such that the
concurrency limit would not be reached under "normal" circumstances - spans would
only start to be dropped if there is an issue.

This helps us cap the amount of memory and the number of sockets used by the exporter
if it cannot keep up with the data it is being given.

This could happen if the local network (e.g. in a browser) or the remote collector
are too slow to handle all the activity.

If we do not have this cap, and the exporter cannot keep up, resources such as
memory and network sockets can be consumed without limit, causing crashes and
other undesirable outcomes far worse than losing some telemetry data.

This also updates the examples to use BatchSpanProcessor as I couldn't really
think of any reason why you would want to use SimpleSpanProcessor in combination
with the collector exporter.

Fixes #1707
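
For illustration, here is a rough sketch of how the new option might be wired up together with `BatchSpanProcessor`, assuming the 0.x package layout in use around the time of this PR (`@opentelemetry/node`, `@opentelemetry/tracing`, `@opentelemetry/exporter-collector`); the `url` value and the `concurrencyLimit` of 10 are placeholder choices, not recommendations:

```ts
import { NodeTracerProvider } from '@opentelemetry/node';
import { BatchSpanProcessor } from '@opentelemetry/tracing';
import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';

const exporter = new CollectorTraceExporter({
  // Placeholder endpoint; adjust to wherever your collector listens.
  url: 'http://localhost:55681/v1/trace',
  // Cap the number of in-flight export requests. Once this many exports
  // are pending, additional exports fail immediately and their spans are
  // dropped instead of queuing up without bound.
  concurrencyLimit: 10,
});

const provider = new NodeTracerProvider();
provider.addSpanProcessor(new BatchSpanProcessor(exporter));
provider.register();
```

With `BatchSpanProcessor` in front, the limit should only be hit when batches back up faster than they can be sent, which is exactly the failure mode described above.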

codecov bot commented Dec 2, 2020

Codecov Report

Merging #1708 (2eda370) into master (1a24f40) will increase coverage by 0.00%.
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##           master    #1708   +/-   ##
=======================================
  Coverage   91.91%   91.92%           
=======================================
  Files         167      167           
  Lines        5567     5573    +6     
  Branches     1183     1186    +3     
=======================================
+ Hits         5117     5123    +6     
  Misses        450      450           
Impacted Files | Coverage Δ
...ages/opentelemetry-exporter-collector/src/types.ts | 100.00% <ø> (ø)
...ry-exporter-collector/src/CollectorExporterBase.ts | 92.30% <100.00%> (+1.00%) ⬆️

@dobesv (Contributor, Author) commented Dec 2, 2020

Note: I considered setting some non-infinite default value for concurrencyLimit, but left it as it is now to avoid changing the behavior of the library.
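
To make that trade-off concrete, here is a hypothetical sketch (not the actual CollectorExporterBase code) of how such a concurrency guard can behave; the class and its names are invented for illustration, with the default left at Infinity so behavior is unchanged unless the option is set:

```ts
// Hypothetical helper, for illustration only: track in-flight sends and
// reject immediately once the limit is reached, so back-pressure turns
// into dropped batches instead of unbounded queuing.
class ConcurrencyLimitedSender<T> {
  private sendingCount = 0;

  constructor(
    private readonly send: (items: T[]) => Promise<void>,
    // Defaults to Infinity, mirroring the PR's choice to keep the
    // existing (unlimited) behavior unless the user opts in.
    private readonly concurrencyLimit: number = Infinity
  ) {}

  export(items: T[]): Promise<void> {
    if (this.sendingCount >= this.concurrencyLimit) {
      // Fail fast instead of accumulating memory and sockets.
      return Promise.reject(new Error('Concurrent export limit reached'));
    }
    this.sendingCount++;
    return this.send(items).finally(() => {
      this.sendingCount--;
    });
  }
}
```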

@dobesv (Contributor, Author) commented Dec 2, 2020

I don't know why it thinks the changes are not covered by the unit tests; they seem to be covered to my eye.

@dyladan (Member) left a comment

The implementation itself LGTM, but @obecny wrote most of this code, so I'm going to defer to him on whether this is a good idea we should merge.

@dobesv force-pushed the concurrent-export-limit branch 3 times, most recently from 5c8719f to 742a228 on December 3, 2020 21:57
@obecny (Member) commented Dec 3, 2020

@dobesv please do not force-push after the first review is done; it makes it impossible to see the changes.

@dobesv (Contributor, Author) commented Dec 3, 2020

@obecny Thanks for the review. I have pushed changes that I hope address your comments. Let me know if there's anything more I can do to help.

@dobesv (Contributor, Author) commented Dec 3, 2020

@obecny Ah sorry about the force push, it's a habit from other places that want a single commit.

@dyladan (Member) commented Dec 3, 2020

We squash it when we merge anyways. No need to worry about it during the PR process.

@obecny (Member) left a comment

lgtm

@obecny added the "enhancement" (New feature or request) label on Dec 10, 2020
@obecny merged commit b260f89 into open-telemetry:master on Dec 10, 2020
Labels
enhancement New feature or request
Development

Successfully merging this pull request may close these issues.

Span processors / trace exporters should discard traces if they cannot be sent quickly enough
4 participants