Distributed Tracing with APM server; with Python #712
The agent actually doesn't do anything with context that you set. For distributed tracing, the standard way to combine traces across different services is via headers. Generally you shouldn't have to worry about this, as we do it automatically: in Python we instrument all the major request libraries, and when the Python agent receives a request, it looks for those headers and uses them to tie the transaction to its parent.

Part of the problem may be that it sounds like you're instantiating your own Tracer objects, which isn't the standard way to use our agent. If you're using a supported framework, check out the documentation for that framework to get the agent set up. But even if you aren't using a supported framework, the established way to manually create transactions is documented here.

Keep me posted if you have more questions. And welcome to the community!
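As a rough illustration of that header mechanism: the field layout below follows the W3C Trace Context format, sent by the Python agent under the elastic-apm-traceparent header name; the helper function itself is purely illustrative, not part of the agent's API.

```python
# Illustration of the trace-context header that instrumented HTTP clients
# attach automatically on outgoing requests. Layout: version-traceid-spanid-flags
# (W3C Trace Context). The helper below is a hypothetical stand-in, not the
# agent's actual code.
import secrets

def outgoing_headers(trace_id=None):
    # Reuse the caller's trace id if we're inside an existing trace,
    # otherwise start a fresh one.
    trace_id = trace_id or secrets.token_hex(16)   # 32 hex chars
    span_id = secrets.token_hex(8)                 # 16 hex chars
    return {"elastic-apm-traceparent": f"00-{trace_id}-{span_id}-01"}

headers = outgoing_headers()
version, trace_id, span_id, flags = headers["elastic-apm-traceparent"].split("-")
assert len(trace_id) == 32 and len(span_id) == 16
```

The receiving service reads the same header, keeps the trace id, and creates a new span id for its own work, which is how the UI can stitch the hops into one waterfall.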
Thank you. I tried the framework integrations as well, but here I was trying without one. I ran the test again. It consists of a service that makes GET requests to two different endpoints; those two services each make another GET request to a final service that returns 'hello world'. Something like this:

When I check the traces in Kibana, this is what I see. Note: I named all services the same, so now the traces appear separately. If I click on one, as you can see, the two requests appear there, but not the final request. In order to see the final request, I have to select one of the intermediate services.

So, what's the way to see all the spans under the same trace?
@nerusnayleinad can you try using different service names for the three services (or four; I'm not sure what the difference between 2a and 2b is)? Also, I suggest using the same transaction type.
There are no differences between 2a and 2b; they are just mock services that receive a request, add some delay, and make another request. With different service names, I get all the requests in separate services, and the view is the same as before when accessing each service. With the same service name and the same transaction type for all of them, the last one overwrites all the traces, so I only see the last request.
Can you give us code snippets showing how you're instrumenting each service manually? Additionally, what library is making the call from 2a/2b to 3? We need to make sure it's on the supported list. It looks like distributed tracing is working on 1 -> 2a/2b, just not on 2a/2b -> 3.
@basepi sure. These are the scripts for all 4 services. service 1: (this is the same service you advised me to use, from the examples.)
service 2a:
service 2b:
service 3:
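As a framework-free sketch of the call chain described above (1 -> 2a/2b -> 3), here is roughly how the header forwarding keeps every hop on the same trace. All names here are hypothetical stand-ins, not the original scripts, and plain function calls stand in for the HTTP requests so the propagation logic is easy to follow.

```python
# Hypothetical simulation of the topology: service 1 calls 2a and 2b,
# each of which calls service 3 ("hello world"), forwarding a
# traceparent-style header at every hop.
import uuid

def make_traceparent(trace_id=None):
    """Build a W3C-style traceparent: version-traceid-spanid-flags."""
    trace_id = trace_id or uuid.uuid4().hex        # 32 hex chars
    span_id = uuid.uuid4().hex[:16]                # 16 hex chars
    return f"00-{trace_id}-{span_id}-01", trace_id

def service3(headers):
    # Leaf service: reports under the caller's trace id.
    return {"trace_id": headers["traceparent"].split("-")[1], "body": "hello world"}

def service2(headers):
    # Middle tier: continue the incoming trace with a new child span id.
    trace_id = headers["traceparent"].split("-")[1]
    child_header, _ = make_traceparent(trace_id)
    return service3({"traceparent": child_header})

def service1():
    # Entry point: start a fresh trace and fan out to 2a and 2b.
    header, trace_id = make_traceparent()
    responses = [service2({"traceparent": header}) for _ in ("2a", "2b")]
    return trace_id, responses

trace_id, responses = service1()
# Every hop carries the same trace id, so the UI can show one waterfall.
assert all(r["trace_id"] == trace_id for r in responses)
```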
I think I have to do something with that |
Alright, I see the disconnect. I have a working example, modified from your example above, in this gist.

The problem was that you didn't quite have the instrumentation for services 2a/2b and 3 correct. When you were looking at the trace, all you were seeing were the transaction and two spans (for the two network calls) from service 1. This is because our instrumentation for Flask requires us to connect to Flask's signals, which we only do if you set up our Flask integration, as documented here. Otherwise the Flask routing doesn't get instrumented, which means that while our headers are there from service 1, the agent doesn't know to look for them. In order to create a transaction that actually uses the incoming HTTP headers, you have to use them when starting the transaction.

Luckily, if you use our official integrations, we do all that hard work for you! This is the waterfall I see when I run the example in my gist:

Much better! Please keep me posted if anything I explained wasn't clear. We're here to help!
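For reference, wiring up the Flask integration looks roughly like this. This is a sketch based on the documented elasticapm.contrib.flask API; the service name and server URL are placeholders.

```python
from flask import Flask
from elasticapm.contrib.flask import ElasticAPM

app = Flask(__name__)

# Connecting the agent to Flask's signals is what lets it pick up the
# incoming traceparent header and start the transaction automatically.
app.config["ELASTIC_APM"] = {
    "SERVICE_NAME": "service-2a",           # placeholder
    "SERVER_URL": "http://localhost:8200",  # placeholder APM Server URL
}
apm = ElasticAPM(app)

@app.route("/")
def index():
    return "hello world"
```

With this in place, each service's transactions are tied to the caller's trace without any manual header handling.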
Oh, yes, much better. Thank you very much.
To round this out: do you have any examples of how to do this without any framework, in pure Python?
Spans are sub-pieces of transactions. The reason I used
It would look similar to your original example, except that you need to create a TraceParent object. I linked to the Flask code, and it's going to look similar to that:
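To make the shape of that concrete: the TraceParent is parsed from the incoming elastic-apm-traceparent header and then handed to the client when starting the transaction (with the real agent, via something like begin_transaction with a trace_parent argument; check the current docs for exact names). The TraceParent class below is an illustrative stand-in showing what the parsing consumes, not the agent's implementation.

```python
# Illustrative stand-in for parsing a traceparent header into its four
# W3C Trace Context fields. Not the agent's real TraceParent class.
from dataclasses import dataclass

@dataclass
class TraceParent:
    version: str
    trace_id: str
    span_id: str
    trace_options: str

    @classmethod
    def from_string(cls, header: str) -> "TraceParent":
        version, trace_id, span_id, flags = header.split("-")
        return cls(version, trace_id, span_id, flags)

incoming = {"elastic-apm-traceparent": "00-" + "ab" * 16 + "-" + "cd" * 8 + "-01"}
tp = TraceParent.from_string(incoming["elastic-apm-traceparent"])
# The child transaction keeps the caller's trace_id, so the UI groups them.
assert tp.trace_id == "ab" * 16
```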
Note that there are a few different helpers on the TraceParent class that help with building these objects. For example, if you were using a message bus and didn't have the concept of HTTP headers, you could use one of those helpers instead.
It looks like all questions have been addressed. I'll close this for now :) |
I've been doing POC tests on different tracing technologies (Jaeger, Zipkin, Stackdriver Trace, and Istio (still Jaeger or Zipkin, though with different concepts)), and now I'm looking at the Elasticsearch APM module. I see it is more or less the same concept: you start a trace when you start a request, and end it when you get a response.
I've generated some traces and am able to see them in Kibana, but I see the traces separately, which makes sense: each time, in each service, I initialize a new Tracer object, and it gets a new ID.
Now, when I want to see the cascade view of spans across several services, or several spans of the same service, I pass the trace ID to the next service, so that it initializes its tracer with this ID, generates a new span ID, and attaches to the same trace.
I've been reading the docs for Python, and the only method that seems to suit this is elasticapm.set_context(), but everything I found in the docs is this:

I would like to know if this is the right way of doing this, or if I am completely off track.