
Add new Splunk HEC exporter #246

Merged: 36 commits merged into open-telemetry:master, May 28, 2020

Conversation

@atoulme atoulme commented May 19, 2020

Description:
Add a new exporter for Splunk HEC.
Link to tracking Issue:
#243

Testing:
Unit tests, all passing with proper coverage.

Documentation:
WIP

exporter/splunkexporter/client.go (review comment: outdated, resolved)
exporter/splunkexporter/config.go (review comment: outdated, resolved)
exporter/splunkexporter/config.go (review comment: outdated, resolved)
exporter/splunkexporter/config.go (review comment: outdated, resolved)
exporter/splunkexporter/factory.go (review comment: outdated, resolved)
exporter/splunkexporter/metricdata_to_splunk.go (review comment: outdated, resolved)
exporter/splunkexporter/metricdata_to_splunk.go (review comment: outdated, resolved)
client: &http.Client{
// TODO: What other settings of http.Client to expose via config?
// Or what others change from default values?
Timeout: 5 * time.Second,
Member

See sapmexporter as an example of an exporter that would be good to mimic. We typically expose a compression option and the number of concurrent connections (if you plan to add support for concurrent connections to this exporter).

Contributor Author

I have taken what I could for now, but definitely more could be done with workers to send parallel payloads.
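For context, a rough sketch of how such options might be surfaced on the exporter's Config struct; the field names, tags, and defaults below are illustrative assumptions, not necessarily what this PR ended up exposing:

package splunkhecexporter

import "time"

// Config is a sketch of the options discussed above. Field names and
// defaults are illustrative, not the exporter's actual schema.
type Config struct {
    // Token is the HEC authentication token.
    Token string `mapstructure:"token"`
    // Endpoint is the Splunk HEC URL.
    Endpoint string `mapstructure:"endpoint"`
    // Timeout bounds each HTTP request (assumed default of 5s).
    Timeout time.Duration `mapstructure:"timeout"`
    // DisableCompression turns off gzip encoding of payloads.
    DisableCompression bool `mapstructure:"disable_compression"`
    // MaxConnections caps concurrent connections to the HEC endpoint.
    MaxConnections uint `mapstructure:"max_connections"`
}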

exporter/splunkexporter/exporter.go (review comment: outdated, resolved)
exporter/splunkexporter/metricdata_to_splunk.go (review comment: outdated, resolved)
@tigrannajaryan tigrannajaryan added this to In progress in Collector May 19, 2020

atoulme commented May 20, 2020

I signed it


atoulme commented May 20, 2020

I have resolved some of the test issues, and am now down to mapping summaries and distributions to data points that Splunk can consume. From my understanding, metric points are usually single values with Splunk. We could add additional metric points for each of the characteristics, similar to what the SignalFx exporter does.
Looking for your guidance on this before I proceed.
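To illustrate one possible mapping in the spirit of the SignalFx approach (a sketch only; the type and field names below are hypothetical, not the code in this PR):

package splunkhecexporter

import "fmt"

// splunkMetric is a simplified stand-in for the exporter's event type.
type splunkMetric struct {
    Metric string
    Value  float64
    Dims   map[string]string
}

// distributionToPoints flattens a distribution into single-value points:
// one for the count, one for the sum, and one per bucket, with each
// bucket tagged by its upper bound.
func distributionToPoints(name string, count uint64, sum float64, bounds []float64, bucketCounts []uint64) []*splunkMetric {
    pts := []*splunkMetric{
        {Metric: name + ".count", Value: float64(count)},
        {Metric: name + ".sum", Value: sum},
    }
    for i, c := range bucketCounts {
        bound := "+Inf"
        if i < len(bounds) {
            bound = fmt.Sprintf("%g", bounds[i])
        }
        pts = append(pts, &splunkMetric{
            Metric: name + ".bucket",
            Value:  float64(c),
            Dims:   map[string]string{"upper_bound": bound},
        })
    }
    return pts
}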

@tigrannajaryan tigrannajaryan self-assigned this May 20, 2020
@tigrannajaryan (Member)

@jrcamp can you please review this one?

exporter/splunkhecexporter/config.go (review comment: outdated, resolved)
exporter/splunkhecexporter/config.go (review comment: outdated, resolved)
exporter/splunkhecexporter/exporter.go (review comment: outdated, resolved)
exporter/splunkhecexporter/exporter.go (review comment: outdated, resolved)
exporter/splunkhecexporter/factory_test.go (review comment: outdated, resolved)
exporter/splunkhecexporter/metricdata_to_splunk.go (review comment: outdated, resolved)
return exporterhelper.NumTimeSeries(md), err
}

io.Copy(ioutil.Discard, resp.Body)
Contributor

Is this still needed in recent versions of Go? I know that in past versions it could cause problems if the response body wasn't read, but does it still?

Contributor Author

I don't know. I did find in the past that reading the body frees the TCP connection so it can be reused across requests, which helps performance.

Contributor

It looks like the Go code does it now: https://github.com/golang/go/blob/9d23975d89e6cc3df4f2156b2ae0df5d2cef16fb/src/net/http/transfer.go#L979

But then the net/http docs say...

    // The http Client and Transport guarantee that Body is always
    // non-nil, even on responses without a body or responses with
    // a zero-length body. It is the caller's responsibility to
    // close Body. The default HTTP client's Transport may not
    // reuse HTTP/1.x "keep-alive" TCP connections if the Body is
    // not read to completion and closed.

So I dunno. 🤷‍♂️

Contributor Author

Some amount of cargo culting required?
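For reference, the drain-and-close idiom under discussion looks roughly like this (a sketch, not the exporter's exact code; the helper name is made up):

package main

import (
    "io"
    "io/ioutil"
    "net/http"
)

// doRequest sends req and drains the response body so the default
// transport can return the HTTP/1.x keep-alive connection to its idle
// pool for reuse by later requests.
func doRequest(client *http.Client, req *http.Request) (int, error) {
    resp, err := client.Do(req)
    if err != nil {
        return 0, err
    }
    defer resp.Body.Close()
    // Read the body to completion before closing it.
    io.Copy(ioutil.Discard, resp.Body)
    return resp.StatusCode, nil
}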

exporter/splunkhecexporter/client.go (review comment: outdated, resolved)
exporter/splunkhecexporter/client.go (review comment: outdated, resolved)
exporter/splunkhecexporter/config.go (review comment: outdated, resolved)
Collector automation moved this from In progress to Review in progress May 21, 2020
atoulme and others added 3 commits May 21, 2020 13:44
Co-authored-by: Jay Camp <jay.r.camp@gmail.com>

codecov bot commented May 23, 2020

Codecov Report

Merging #246 into master will not change coverage.
The diff coverage is n/a.


@@           Coverage Diff           @@
##           master     #246   +/-   ##
=======================================
  Coverage   77.80%   77.80%           
=======================================
  Files         130      130           
  Lines        6686     6686           
=======================================
  Hits         5202     5202           
  Misses       1194     1194           
  Partials      290      290           


@atoulme atoulme marked this pull request as ready for review May 23, 2020 00:24
@atoulme atoulme requested a review from a team as a code owner May 23, 2020 00:24
Comment on lines 96 to 110
func encodeBody(zippers *sync.Pool, dps []*splunkMetric) (bodyReader io.Reader, compressed bool, err error) {
buf := new(bytes.Buffer)
encoder := json.NewEncoder(buf)
for _, e := range dps {
err := encoder.Encode(e)
if err != nil {
return nil, false, err
}
buf.WriteString("\r\n\r\n")
}
return getReader(zippers, buf)
}

// avoid attempting to compress things that fit into a single ethernet frame
func getReader(zippers *sync.Pool, b *bytes.Buffer) (io.Reader, bool, error) {
Contributor

Could consider making this streaming based (see https://medium.com/stupid-gopher-tricks/streaming-data-in-go-without-buffering-3285ddd2a1e5 for example) to reduce peak memory usage.

Contributor

@bogdandrutu are there any guidelines about preferring streaming vs. buffering?
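As a rough illustration of the streaming alternative being suggested (a sketch under the assumption that the request body can be supplied as an io.Reader; this is not the code in the PR):

package main

import (
    "compress/gzip"
    "encoding/json"
    "io"
)

// streamBody encodes events as JSON and gzips them on the fly through a
// pipe, so the full payload is never buffered in memory at once.
func streamBody(events []interface{}) io.Reader {
    pr, pw := io.Pipe()
    go func() {
        zw := gzip.NewWriter(pw)
        enc := json.NewEncoder(zw)
        for _, e := range events {
            if err := enc.Encode(e); err != nil {
                pw.CloseWithError(err)
                return
            }
        }
        // Close the gzip writer to flush, propagating any error to the reader.
        pw.CloseWithError(zw.Close())
    }()
    return pr
}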


jrcamp commented May 26, 2020

@atoulme is this ready? I think if we want to make it stream-based we may want to add benchmarks so we can make potential optimizations data-driven. I would be fine doing that in a separate PR.


atoulme commented May 26, 2020

@jrcamp yes, I think this is ready.

I will start building an end to end example.


jrcamp commented May 26, 2020

@tigrannajaryan ready for final review

@tigrannajaryan tigrannajaryan left a comment (Member)

LGTM

Collector automation moved this from Review in progress to Reviewer approved May 26, 2020
@tigrannajaryan (Member)

@atoulme the code looks good, can you please add a README.md similar to what we have for other exporters?


atoulme commented May 26, 2020

@tigrannajaryan added a README.

@tigrannajaryan tigrannajaryan merged commit 82b1adf into open-telemetry:master May 28, 2020
Collector automation moved this from Reviewer approved to Done May 28, 2020
@tigrannajaryan (Member)

Thanks @atoulme !

atoulme added a commit to atoulme/opentelemetry-collector-contrib that referenced this pull request Jun 5, 2020
bogdandrutu pushed a commit that referenced this pull request Jun 6, 2020
… HEC exporter (#255)

* Bring changes to signalfx exporter matching code review comments of #246

* Add test checking for invalid distribution without buckets
Labels: none
Projects: Collector (Done)
Participants: 3