Adding HTTP transport module #23
Conversation
I ticked the version number in the

  }
}).then((response) => {
  if (response.status === 202) {
    this.queue.length = 0;
What do you think about truncating the queue in every case, regardless of whether the request succeeds or not? We don't require consistency in Zipkin trace data. My concern is that if the Zipkin server is down, the POST would fail and this.queue would never be emptied, leading to a memory leak.
Good call. I'll take care of this and the other items today. Thanks!
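The change being discussed could look something like the sketch below (illustrative, not the PR's actual code): the queue is snapshotted and truncated before the request resolves, so a failed POST can't cause unbounded growth. `postFn` is a hypothetical injected sender that returns a Promise; dropped spans are acceptable since Zipkin trace data is best-effort.

```javascript
// Sketch: truncate the queue regardless of the request outcome.
class HttpLogger {
  constructor(postFn) {
    this.postFn = postFn; // assumption: (body) => Promise
    this.queue = [];
  }

  flush() {
    if (this.queue.length === 0) return Promise.resolve();
    const batch = this.queue.slice();
    this.queue.length = 0; // emptied up front, even if the POST fails
    return this.postFn(JSON.stringify(batch)).catch((err) => {
      // swallow the error: this batch of spans is intentionally dropped
      console.error('Zipkin POST failed, spans dropped:', err.message);
    });
  }
}
```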
Great work! The Lerna thing was my mistake; Lerna didn't publish the project properly last time I tried. I'll fix it when I release :)
👏
this.endpoint = endpoint;
this.queue = [];

setInterval(() => {
Maybe unref the timer? https://nodejs.org/api/timers.html#timers_timeout_unref
That's a good point! I'll make the change.
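The suggested change is small: `setInterval` returns a `Timeout` object whose `unref()` method (a standard Node.js API, linked above) tells the event loop not to keep the process alive on account of this timer alone. A minimal sketch:

```javascript
// Unref the flush timer so the interval by itself doesn't
// prevent the Node.js process from exiting.
const timer = setInterval(() => {
  // hypothetical flush of queued spans would go here
}, 1000);
timer.unref(); // process may exit even while this timer is pending
```

Without `unref()`, any service importing the transport would hang on shutdown until the interval was explicitly cleared.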
👍 LGTM
All feedback has been addressed, and it looks like the Travis build is back, so 👍
Thanks a lot!
Fixes #15. I modeled most of this after the Scribe and Kafka loggers. Basically, there is just an internal queue that gets checked every second for new spans. If there are spans, it batches them into a single JSON POST (an array) and sends them to a configurable HTTP endpoint.
Let me know if you'd like any changes made! I've tested this on a Restify-based service and I also added unit / integration tests.
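The mechanism described above can be sketched as follows. The names (`createTransport`, `postBatch`, `logSpan`) are illustrative, not the module's actual API: spans accumulate in a queue, and a periodic timer drains them into one JSON array per POST to the configured endpoint.

```javascript
// Sketch of a queue-and-flush HTTP transport (illustrative names).
// `postBatch` is an injected sender: (endpoint, jsonBody) => void.
function createTransport(endpoint, postBatch, intervalMs = 1000) {
  const queue = [];
  const timer = setInterval(() => {
    if (queue.length === 0) return; // nothing to send this tick
    const batch = queue.slice();
    queue.length = 0; // truncate regardless of the request outcome
    postBatch(endpoint, JSON.stringify(batch)); // one POST per batch
  }, intervalMs);
  timer.unref(); // don't keep the process alive for the flush timer
  return { logSpan: (span) => queue.push(span), queue, timer };
}
```

A consumer would call `logSpan` per span and let the interval handle batching, which keeps the hot path to a single array push.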