Add tests for #920. "brave.flush" annotation should not be reported for finished spans #921

Merged
merged 3 commits on Jun 13, 2019

Conversation

lambcode (Contributor)

These tests are currently failing, but show the same bug as described in #920. See the issue for more details.

@codefromthecrypt (Member)

Thanks, will be looking at this shortly, and the next release will address whatever we can without breaking something else :)

/**
 * Must be a separate method from the test method to allow for local variables to be garbage
 * collected.
 */
private static void simulateInProcessPropagation(Tracer tracer1, Tracer tracer2) {
Member

This is an odd use case. The tracer (tracing, etc.) is a singleton and meant to be used as such.

In-process propagation involves moving the Span or TraceContext across threads, not switching tracer instances per thread.
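For reference, a minimal sketch of that pattern with a single Tracing instance, using currentTraceContext().executorService(...) to carry the context onto another thread (names here are illustrative, not from this PR):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import brave.Span;
import brave.Tracer;
import brave.Tracing;

public class InProcessPropagationSketch {
  public static void main(String[] args) throws Exception {
    Tracing tracing = Tracing.newBuilder().localServiceName("example").build();
    Tracer tracer = tracing.tracer();

    // Wrap the pool so the current trace context follows work onto other threads
    ExecutorService pool =
        tracing.currentTraceContext().executorService(Executors.newSingleThreadExecutor());

    Span parent = tracer.nextSpan().name("parent").start();
    try (Tracer.SpanInScope ws = tracer.withSpanInScope(parent)) {
      pool.submit(() -> {
        // Same Tracing instance: the child picks up the propagated context
        Span child = tracer.nextSpan().name("child").start();
        child.finish();
      }).get();
    } finally {
      parent.finish();
      pool.shutdown();
      tracing.close();
    }
  }
}
```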

Can you elaborate why you are doing this? It might cause problems other than the scenario noted here.

Contributor Author

I was not aware that the tracer/tracing was meant to be a singleton. The API lets one create different instances via Tracing.newBuilder(). Is the intent that this builder is used only once per application? If so, why does the API allow you to create multiple instances?

As a little bit of background, we have different tracing instances, each with a different serviceName. As an example, let's say I have web service Aws and web service Bws. Aws requests information from Bws through a library Bclient that makes HTTP calls to Bws. The Bclient library uses a tracing instance with serviceName "B", whereas Aws has a tracing instance with serviceName "A". A trace will start in Aws with a span that has service name "A". This service will eventually call into the Bclient library code, which will create child spans (via a different tracer) with serviceName "B".
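Roughly, the setup looks like this (a sketch only; class and span names are made up for illustration):

```java
import brave.ScopedSpan;
import brave.Tracer;
import brave.Tracing;

public class TwoTracingSketch {
  public static void main(String[] args) {
    // One Tracing instance per service name, as described above
    Tracing aTracing = Tracing.newBuilder().localServiceName("A").build();
    Tracing bTracing = Tracing.newBuilder().localServiceName("B").build();
    Tracer aTracer = aTracing.tracer();
    Tracer bTracer = bTracing.tracer();

    // Aws handles a request with service name "A"
    ScopedSpan aSpan = aTracer.startScopedSpan("handle-request");
    try {
      // Bclient starts its own scoped span via a different tracer, reported as "B"
      ScopedSpan bSpan = bTracer.startScopedSpan("call-bws");
      bSpan.finish();
    } finally {
      aSpan.finish();
      bTracing.close();
      aTracing.close();
    }
  }
}
```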

We've actually had some discussions internally recently on whether or not we should be doing this, so some guidance on when different tracing instances should be used would be appreciated.

As for the in-process propagation, I think the confusion may be that the test does not actually use multiple threads. I found that threads are not strictly necessary, because the behavior is actually caused by using multiple tracing instances and scoped spans. I just found the in-process propagation example to be an easy one to understand.

Member

I will try to answer your questions:

Q: why let people make instances if should be a singleton?
A: Many frameworks handle singletons differently, and there are classloader concerns, too. Brave does not constrain you to using Spring, Guice, or other singleton management.

Q: should we be using different instances per endpoint in order to control the service name?
A: This is complex and likely more fine grained than most use. For most sites, endpoints are actually span names. We have a tool called FinishedSpanHandler, which has the concept of localRoot, which would allow you to rewrite data consistently per fraction of a trace. Again, this is more complex than what most sites do. If you want to discuss that further, chat is better as it is less spammy to watchers: https://gitter.im/openzipkin/zipkin
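As a rough sketch of that FinishedSpanHandler idea (the handler class, the matching condition, and the names are illustrative, not a prescribed implementation):

```java
import brave.Tracing;
import brave.handler.FinishedSpanHandler;
import brave.handler.MutableSpan;
import brave.propagation.TraceContext;

public class RewriteExample {
  // Hypothetical handler: rewrites the local service name for spans produced by
  // the B client code, keyed here on the span name purely for illustration.
  static final class BClientServiceName extends FinishedSpanHandler {
    @Override public boolean handle(TraceContext context, MutableSpan span) {
      if ("call-bws".equals(span.name())) {
        span.localServiceName("B");
      }
      return true; // report the span as usual
    }
  }

  public static void main(String[] args) {
    Tracing tracing = Tracing.newBuilder()
        .localServiceName("A")
        .addFinishedSpanHandler(new BClientServiceName())
        .build();
    tracing.close();
  }
}
```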

On the test talking about threads and then not using them: I personally found this confusing, and it took me a few hours to work out what was supposed to be checked (thinking time, not pure work). In general, being precise like your diagram is best, as comments that lie are distracting to current and future maintainers. Your tests helped; it's just that they mentioned things that wouldn't have been tested in that code. Thanks regardless!

Contributor Author

Thanks for the answers! This is super helpful. And sorry for any confusion those tests caused.
