Add new Splunk HEC exporter #246
Conversation
exporter/splunkexporter/exporter.go (outdated)

    client: &http.Client{
        // TODO: What other settings of http.Client to expose via config?
        // Or what others change from default values?
        Timeout: 5 * time.Second,
See sapmexporter as an example of an exporter that would be good to mimic. We typically expose a compression option and the number of concurrent connections (if you plan to add support for concurrent connections to this exporter).
I have taken what I could for now, but definitely more could be done with workers to send parallel payloads.
Co-authored-by: Tigran Najaryan <4194920+tigrannajaryan@users.noreply.github.com>
I signed it
I have resolved some of the test issues, and am down to mapping summaries and distributions to data points that Splunk may consume. Metric points are usually single values with Splunk, from my understanding. We could add additional metric points for each of the characteristics, similar to what the SignalFx exporter does.
@jrcamp can you please review this one?
    return exporterhelper.NumTimeSeries(md), err
    }

    io.Copy(ioutil.Discard, resp.Body)
Is this still needed in recent versions of Go? I know in past versions it could cause problems if response body wasn't read but does it still?
I don't know. I did find in the past that reading the body frees the TCP connection so it can be reused across requests, which helps performance.
It looks like the Go code does it now: https://github.com/golang/go/blob/9d23975d89e6cc3df4f2156b2ae0df5d2cef16fb/src/net/http/transfer.go#L979
But then the net/http docs say...
// The http Client and Transport guarantee that Body is always
// non-nil, even on responses without a body or responses with
// a zero-length body. It is the caller's responsibility to
// close Body. The default HTTP client's Transport may not
// reuse HTTP/1.x "keep-alive" TCP connections if the Body is
// not read to completion and closed.
So I dunno. 🤷‍♂️
Some amount of cargo culting required?
Co-authored-by: Jay Camp <jay.r.camp@gmail.com>
Codecov Report

    @@           Coverage Diff           @@
    ##           master     #246   +/-   ##
    =======================================
      Coverage   77.80%   77.80%
    =======================================
      Files         130      130
      Lines        6686     6686
    =======================================
      Hits         5202     5202
      Misses       1194     1194
      Partials      290      290
exporter/splunkhecexporter/client.go (outdated)

    func encodeBody(zippers *sync.Pool, dps []*splunkMetric) (bodyReader io.Reader, compressed bool, err error) {
        buf := new(bytes.Buffer)
        encoder := json.NewEncoder(buf)
        for _, e := range dps {
            err := encoder.Encode(e)
            if err != nil {
                return nil, false, err
            }
            buf.WriteString("\r\n\r\n")
        }
        return getReader(zippers, buf)
    }

    // avoid attempting to compress things that fit into a single ethernet frame
    func getReader(zippers *sync.Pool, b *bytes.Buffer) (io.Reader, bool, error) {
Could consider making this streaming based (see https://medium.com/stupid-gopher-tricks/streaming-data-in-go-without-buffering-3285ddd2a1e5 for example) to reduce peak memory usage.
@bogdandrutu are there any guidelines about preferring streaming vs. buffering?
@atoulme is this ready? I think if we want to make it stream-based we may want to add benchmarks so we can make any potential optimizations data-driven. I would be fine doing that in a separate PR.
@jrcamp yes, this is ready, I think. I will start building an end-to-end example.
@tigrannajaryan ready for final review
LGTM
@atoulme the code looks good, can you please add a README.md similar to what we have for other exporters?
@tigrannajaryan added a README.
Thanks @atoulme!
This implements a metric exporter for the Splunk HEC protocol.
Description: Add a new exporter for Splunk HEC.
Link to tracking issue: #243
Testing: Unit tests, all passing with proper coverage.
Documentation: WIP