Feature Request: Tracing support #179

Closed · tHBp opened this issue Jun 7, 2020 · 14 comments · Fixed by #341

Labels: enhancement (New feature or request)

Comments

tHBp (Contributor) commented Jun 7, 2020

Consider adding tracing support to enable checking runtime performance of queries and downstream API/Database calls.

Apollo supports the same via a plugin.

It could either be part of the core or be implemented as a plugin, though the latter would require rolling out a plugin architecture first.

mcollina (Collaborator) commented Jun 7, 2020

Would you like to work on this? I think it would be a nice feature to add.

tHBp (Contributor, Author) commented Jun 9, 2020

Sure, I'll try to complete a first revision over the weekend.

xamgore commented Jul 25, 2020

Useful information can be found in the apollo-tracing repository.

@tHBp could you please share your work? Maybe I could continue it.

pscarey (Collaborator) commented Jul 30, 2020

@xamgore @tHBp

I'd like to flag that Apollo's approach isn't the only solution here: tracing could also be implemented with OpenTelemetry. Adding an OpenTelemetry exporter would be a viable way of supporting many more tracing backends in a performant way.

Wrapping all resolvers in a schema to achieve detailed tracing (by analysing the GraphQL info object, which includes type and path) is viable, but can have a large performance impact. I have implemented this approach on certain APIs for advanced audit logging, where it's worth the hit, but keeping the overhead low is key for tracing.

A viable approach here may be:

  • Wrapping and timing only the key schema/resolvers (i.e. where a resolve function has been provided).
  • Tracing execution time only for a configurable percentage of requests (sampling).

Timing every resolver may be possible, but the performance impact should be kept in mind (Apollo certainly achieves this in Apollo Engine / Graph Manager).

I can provide some additional info on how you might go about wrapping the schema if either of you is interested in this.
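A minimal sketch of the resolver-wrapping-plus-sampling approach described above, assuming a plain graphql-js schema; the wrapResolvers helper, sampleRate option, and console output are illustrative, not an existing mercurius API:

```js
'use strict'

const { isObjectType } = require('graphql')

// Wrap every field that has an explicit resolver with a timing wrapper,
// sampling only a configurable fraction of calls.
function wrapResolvers (schema, { sampleRate = 0.1 } = {}) {
  for (const type of Object.values(schema.getTypeMap())) {
    // Skip scalars, introspection types, etc.
    if (!isObjectType(type) || type.name.startsWith('__')) continue
    for (const field of Object.values(type.getFields())) {
      const original = field.resolve
      if (!original) continue // only wrap where a resolve function was provided
      field.resolve = async function (source, args, context, info) {
        // Sampling: skip tracing for most calls to limit overhead.
        if (Math.random() > sampleRate) {
          return original(source, args, context, info)
        }
        const start = process.hrtime.bigint()
        try {
          return await original(source, args, context, info)
        } finally {
          const durationNs = process.hrtime.bigint() - start
          // A real implementation would emit a span or metric here.
          console.log(`${info.parentType.name}.${info.fieldName}: ${durationNs}ns`)
        }
      }
    }
  }
  return schema
}
```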

mcollina (Collaborator) commented

I would actually prefer adding OpenTelemetry support.

asciidisco added the enhancement (New feature or request) label on Aug 3, 2020
PacoDu (Contributor) commented Oct 30, 2020

I think we will just need some documentation that explains how to use OpenTelemetry with mercurius. Some packages for GraphQL instrumentation are about to be released: @opentelemetry/instrumentation-graphql. I've not tested this package yet.

open-telemetry/opentelemetry-js-contrib#226

PacoDu (Contributor) commented Oct 30, 2020

It works! I had to clone the opentelemetry repo to access @opentelemetry/instrumentation-graphql, but here is a simple tracer, and it works pretty well with mercurius and federation:

```js
'use strict'

const api = require('@opentelemetry/api')
const { NodeTracerProvider } = require('@opentelemetry/node')
const { SimpleSpanProcessor } = require('@opentelemetry/tracing')
const { ZipkinExporter } = require('@opentelemetry/exporter-zipkin')
const { GraphQLInstrumentation } = require('@opentelemetry/instrumentation-graphql')

module.exports = serviceName => {
  const provider = new NodeTracerProvider()

  // Patch graphql so parse/validate/execute and resolvers emit spans.
  const graphQLInstrumentation = new GraphQLInstrumentation()
  graphQLInstrumentation.setTracerProvider(provider)
  graphQLInstrumentation.enable()

  // Export each finished span to Zipkin.
  provider.addSpanProcessor(
    new SimpleSpanProcessor(
      new ZipkinExporter({
        serviceName
      })
    )
  )

  // Register the provider as the global tracer provider.
  provider.register()

  return api.trace.getTracer(serviceName)
}
```
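A hypothetical usage with mercurius (the ./tracer path, service name, and schema are illustrative). The tracer should be initialized before mercurius is loaded, so the instrumentation can patch the graphql module first:

```js
'use strict'

// Initialize tracing first, so @opentelemetry/instrumentation-graphql
// patches graphql before mercurius requires it.
require('./tracer')('my-service') // hypothetical path to the module above

const Fastify = require('fastify')
const mercurius = require('mercurius')

const app = Fastify()

app.register(mercurius, {
  schema: 'type Query { add(x: Int, y: Int): Int }',
  resolvers: {
    Query: {
      add: async (_, { x, y }) => x + y
    }
  }
})

app.listen(3000)
```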

mcollina (Collaborator) commented

Could we ship that as a small module with a few tests in this org?

PacoDu (Contributor) commented Oct 31, 2020

> Could we ship that as a small module with a few tests in this org?

I'm not sure this should be a module: from what I understand about OpenTelemetry, the tracer definition is up to the end user. What we can do on our end is add some mercurius-specific spans, maybe on loaders, cache, or jit?

https://open-telemetry.github.io/opentelemetry-js/#library-authors
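A hedged sketch of what a mercurius-specific span around a loader could look like, using the OpenTelemetry API directly; the span name, loader shape, and sample data are illustrative, not an existing mercurius integration:

```js
'use strict'

const api = require('@opentelemetry/api')

// Sample data, purely illustrative.
const owners = { Max: { name: 'Jennifer' } }

const loaders = {
  Dog: {
    async owner (queries, context) {
      // Manually create a mercurius-specific span around the loader call.
      const tracer = api.trace.getTracer('mercurius')
      const span = tracer.startSpan('mercurius.loader.Dog.owner')
      try {
        return queries.map(({ obj }) => owners[obj.name])
      } finally {
        span.end()
      }
    }
  }
}

module.exports = loaders
```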

mcollina (Collaborator) commented

I think it would be awesome to wrap resolvers and loaders.

PacoDu (Contributor) commented Oct 31, 2020

> I think it would be awesome to wrap resolvers and loaders.

I think it needs more experimentation, but resolvers are already covered automatically via the GraphQL instrumentation; a parameter allows configuring the depth of the tracing, for example:
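A hedged snippet: the depth option comes from the instrumentation's config and limits how deep resolver spans are created; the value 2 is illustrative.

```js
const { GraphQLInstrumentation } = require('@opentelemetry/instrumentation-graphql')

// Only create spans for resolvers up to two levels deep
// (assumes the package's `depth` config option).
const graphQLInstrumentation = new GraphQLInstrumentation({ depth: 2 })
```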

mcollina (Collaborator) commented

I have a feeling it won't work for graphql-jit, but maybe I'm too pessimistic.

PacoDu (Contributor) commented Oct 31, 2020

> I have a feeling it won't work for graphql-jit, but maybe I'm too pessimistic.

Yes, it won't.

About fastify and opentelemetry, the current landscape is a bit confusing because there are multiple initiatives, like https://github.com/autotelic/fastify-opentelemetry and https://github.com/mkinoshi/fastify-opentelemetry.

see open-telemetry/opentelemetry-js-contrib#41

The mkinoshi fastify-opentelemetry doesn't seem up to date with the latest version of fastify (it uses deprecated hooks). I haven't tested the autotelic version yet.

@mcollina Is there any plan to move this kind of integration under the fastify org?

mcollina (Collaborator) commented

https://github.com/autotelic/fastify-opentelemetry should do the trick.
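A minimal registration sketch, assuming the package installs as a standard fastify plugin under the npm name @autotelic/fastify-opentelemetry; plugin options are omitted here since they are specific to that plugin's README:

```js
'use strict'

const fastify = require('fastify')()

// Register the plugin like any other fastify plugin; see the
// autotelic/fastify-opentelemetry README for the available options.
fastify.register(require('@autotelic/fastify-opentelemetry'))

fastify.get('/', async () => {
  return { hello: 'world' }
})

fastify.listen(3000)
```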
