
Add AlibabaCloud LogService exporter #259

Merged · 10 commits merged into open-telemetry:master on Jun 12, 2020

Conversation

@shabicheng (Contributor):

Description:

This PR introduces an exporter for AlibabaCloud LogService. The exporter translates spans and metrics into the protobuf format expected by LogService and sends them over HTTP.

The trace model is similar to the Jaeger model used on Aliyun Log Service. The metrics model in LogService can be used as a Prometheus remote storage adapter.

Testing:

Unit tests were added for translating resources, spans, and metrics to the LogService model. Code coverage is now 86.5%.

Manually tested by sending data to a LogService instance.

Documentation:

Added a README, which describes the exporter's config.
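
For orientation, a sketch of what a configuration block for this exporter might look like; the key names below are illustrative guesses based on the README under review and typical collector exporters, not a verified reference:

```yaml
exporters:
  alibabacloud_logservice:
    # Illustrative keys -- consult the exporter's README for the
    # authoritative option names.
    endpoint: "cn-hangzhou.log.aliyuncs.com"   # LogService endpoint (assumed)
    project: "demo-project"                    # LogService project (assumed)
    logstore: "demo-logstore"                  # LogService logstore (assumed)
    access_key_id: "<access-key-id>"           # credentials (assumed)
    access_key_secret: "<access-key-secret>"
```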

@shabicheng shabicheng requested a review from a team as a code owner May 26, 2020 14:39
codecov bot commented May 26, 2020:

Codecov Report

Merging #259 into master will increase coverage by 1.25%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master     #259      +/-   ##
==========================================
+ Coverage   79.64%   80.90%   +1.25%     
==========================================
  Files         163      169       +6     
  Lines        8338     8887     +549     
==========================================
+ Hits         6641     7190     +549     
  Misses       1343     1343              
  Partials      354      354              
Impacted Files Coverage Δ
exporter/alibabacloudlogserviceexporter/factory.go 100.00% <100.00%> (ø)
...alibabacloudlogserviceexporter/metrics_exporter.go 100.00% <100.00%> (ø)
...oudlogserviceexporter/metricsdata_to_logservice.go 100.00% <100.00%> (ø)
...r/alibabacloudlogserviceexporter/trace_exporter.go 100.00% <100.00%> (ø)
...cloudlogserviceexporter/tracedata_to_logservice.go 100.00% <100.00%> (ø)
...xporter/alibabacloudlogserviceexporter/uploader.go 100.00% <100.00%> (ø)
... and 2 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@jrcamp (Contributor) left a comment:

On shutdown exporters should flush any internal queues to ensure all metrics/traces are sent before returning. Does the producer you're using have those and is there a way of waiting for them to finish if so?
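
For reference, a minimal sketch of the kind of shutdown hook being asked for, assuming a hypothetical producer interface with blocking-close semantics (the `logProducer` type and its `Close` method are illustrative stand-ins, not the actual aliyun producer API):

```go
package alibabacloudlogserviceexporter

import (
	"context"
	"time"
)

// logProducer is a hypothetical stand-in for the sending client.
type logProducer interface {
	// Close blocks until buffered batches are flushed or the timeout elapses.
	Close(timeout time.Duration) error
}

type logServiceExporter struct {
	producer logProducer
}

// shutdown drains any internal queues so pending spans/metrics are sent
// before the collector returns.
func (e *logServiceExporter) shutdown(_ context.Context) error {
	// Wait up to 30s for in-flight batches to be flushed.
	return e.producer.Close(30 * time.Second)
}
```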

Comment on lines 13 to 14
- `max_retry` (optional): maximum retry count when a send fails.
- `max_buffer_size` (optional): maximum buffer size used in memory.
@jrcamp (Contributor) commented Jun 5, 2020:

@shabicheng Apologies for the delayed review.

Retries and buffers are provided by processors. This ensures they are handled consistently and that limits are applied globally. See these docs:

https://github.com/open-telemetry/opentelemetry-collector/blob/master/processor/queuedprocessor
https://github.com/open-telemetry/opentelemetry-collector/blob/master/processor/memorylimiter
https://github.com/open-telemetry/opentelemetry-collector/blob/master/processor/batchprocessor

@bogdandrutu please correct me if wrong.

@shabicheng (Author) commented:

Sending data to LogService through the producer is best practice; the producer applies a different retry strategy depending on the error code returned by the server. The max retry count and max buffer size are important producer settings, so we expose these parameters.

@jrcamp (Contributor) commented:

@tigrannajaryan what are your thoughts? Is it a showstopper if retries and buffers are not managed by the processors? I assume so, given that it defeats the purpose of having those things centrally managed, but I'm just verifying.

@shabicheng (Author) commented:

@jrcamp To avoid conceptual conflicts, I have removed the producer and now use a plain client instead. We will use processors to handle retries, batching, and memory limiting. If we have further requirements for processors, I will submit processor PRs.

@jrcamp (Contributor) commented:

Great! One thing to keep in mind is that if you want to send things in parallel, you'll still have to implement that part yourself. grep for NumWorkers to see current examples of it.
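
For illustration, a rough sketch of the NumWorkers-style fan-out being referred to; the `batch` type and `pushBatch` callback are illustrative, only the worker-pool pattern mirrors the existing exporters:

```go
package alibabacloudlogserviceexporter

import "sync"

// batch is a hypothetical stand-in for a slice of encoded log groups.
type batch []byte

// sendParallel pushes batches using numWorkers concurrent senders,
// mirroring the NumWorkers pattern used by some existing exporters.
func sendParallel(batches []batch, numWorkers int, pushBatch func(batch) error) {
	ch := make(chan batch)
	var wg sync.WaitGroup
	for i := 0; i < numWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for b := range ch {
				// A real implementation would collect errors and
				// report them back to the caller.
				_ = pushBatch(b)
			}
		}()
	}
	for _, b := range batches {
		ch <- b
	}
	close(ch) // signal workers to exit once the channel is drained
	wg.Wait()
}
```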

@jrcamp (Contributor) commented:

Actually, you may be able to just use https://github.com/open-telemetry/opentelemetry-collector/tree/master/processor/queuedprocessor. Can you check whether that will work for your use cases?

@shabicheng (Author) commented:

@jrcamp Thanks for the reminder; we can use the queuedprocessor directly. In practice we will use the combination of memorylimiter, batchprocessor, and queuedprocessor, as sketched below.
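
For illustration, a pipeline combining those three processors might look roughly like this (the processor names match the collector of that era; the specific values are arbitrary examples, not recommendations):

```yaml
processors:
  memory_limiter:
    check_interval: 1s
    limit_mib: 512
  batch:
    timeout: 5s
  queued_retry:
    num_workers: 4
    queue_size: 1000

service:
  pipelines:
    traces:
      receivers: [otlp]
      # Order matters: limit memory first, then batch, then queue/retry.
      processors: [memory_limiter, batch, queued_retry]
      exporters: [alibabacloud_logservice]
```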

A review thread on exporter/alibabacloudlogserviceexporter/trace_exporter.go was marked resolved (outdated).
@jrcamp (Contributor) commented Jun 5, 2020:

Also please rebase since things have been delayed.

@shabicheng shabicheng requested a review from a team as a code owner June 6, 2020 09:47
linux-foundation-easycla bot commented Jun 6, 2020:

CLA Check: the committers are authorized under a signed CLA.

@shabicheng (Author) commented:

> On shutdown exporters should flush any internal queues to ensure all metrics/traces are sent before returning. Does the producer you're using have those and is there a way of waiting for them to finish if so?

Thanks for the reminder. The producer can close gracefully, and I've added a shutdown function to the exporter.

@shabicheng (Author) commented:

The load test failed, but it doesn't seem to be related to this PR.

@jrcamp (Contributor) commented Jun 10, 2020:

lgtm, I think the last thing to address is the comment above about whether you need to handle parallelism here or not.

Collector automation moved this from In progress to Reviewer approved Jun 11, 2020
@shabicheng (Author) commented:

The target code coverage is now 95.00%; we will add more tests to reach that threshold.

@jrcamp (Contributor) commented Jun 11, 2020:

@shabicheng is this ready for merge?

@tigrannajaryan (Member) commented:

@shabicheng please run `make gotidy` to clean the go.sum files.

@bogdandrutu bogdandrutu merged commit 5dabcfe into open-telemetry:master Jun 12, 2020
Collector automation moved this from Reviewer approved to Done Jun 12, 2020
wyTrivail referenced this pull request in mxiamxia/opentelemetry-collector-contrib Jul 13, 2020
* Add alibabacloud_logservice exporter

* refine code for ci

* refine code for review comments

* upgrade latest producer version

* trigger ci

* remove producer, use raw logservice client

* refine code and add more unittest
mxiamxia referenced this pull request in mxiamxia/opentelemetry-collector-contrib Jul 22, 2020
* INITIAL

* Properly implement name

* reword comment

* address comments and rename stop to shutdown

* Use assert in tests, change to shutdown instead of shutdownFunc and add some missing comments.
ljmsc referenced this pull request in ljmsc/opentelemetry-collector-contrib Feb 21, 2022
* export noop meter

* move mock meter to internal/metric package