# Docs for exporting telemetry in LangSmith + Example OTEL Collector configuration #852
## Conversation
## LangSmith Services

The following LangSmith services expose metrics at an endpoint, in the Prometheus metrics format.

- **Backend**: `http://<backend_service_name>.<namespace>.svc.cluster.local:1984/metrics`
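To scrape an endpoint like this with the OpenTelemetry Collector, a minimal sketch could use the Collector's `prometheus` receiver. The exporter target below is an assumption, not part of this PR; substitute your own metrics backend:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: langsmith-backend
          scrape_interval: 30s
          static_configs:
            - targets:
                - <backend_service_name>.<namespace>.svc.cluster.local:1984

exporters:
  # Placeholder endpoint: point this at your observability backend.
  otlp:
    endpoint: my-observability-backend:4317

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [otlp]
```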
host-backend as well?
And might we want to grab nginx metrics? I believe nginx has a way to export metrics as well.
## Redis

If you are using the in-cluster Redis instance from the Helm chart, LangSmith can expose metrics for you if you upgrade the chart with the following values:

```yaml
redis:
```
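The actual values are truncated in this diff hunk. Purely as an illustration, charts that bundle a Bitnami-style Redis commonly enable a metrics exporter sidecar with values along these lines; the key names here are hypothetical for this chart:

```yaml
redis:
  metrics:
    enabled: true          # hypothetical key: deploys a redis-exporter sidecar
    serviceMonitor:
      enabled: false       # enable only if you run the Prometheus Operator
```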
Like I mentioned before, I think we should include the exporters in the separate monitoring chart? Ideally we don't want to include another image that people are worried about inside our chart.
Alternative crazy thought: what if we run an ad hoc queue job to export redis/pg metrics -> OTEL endpoint (push vs. pull model)?
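To make the push-vs-pull distinction concrete, here is a minimal stdlib-only sketch of the push model being proposed: a one-shot job gathers metrics and POSTs them to a collector's HTTP ingest endpoint. The stub collector, the endpoint path, and the metric names are all hypothetical, not part of this PR:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

received = []  # payloads the stub "collector" has accepted


class CollectorStub(BaseHTTPRequestHandler):
    """Stand-in for an OTEL collector's HTTP ingest endpoint (hypothetical)."""

    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        received.append(json.loads(body))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass


server = HTTPServer(("127.0.0.1", 0), CollectorStub)
threading.Thread(target=server.serve_forever, daemon=True).start()


def push_metrics(endpoint, metrics):
    """The 'ad hoc queue job': gather metrics and push them (push model)."""
    req = Request(
        endpoint,
        data=json.dumps(metrics).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return resp.status


# Pretend this value was scraped from Redis INFO / pg_stat_* views.
status = push_metrics(
    f"http://127.0.0.1:{server.server_port}/v1/metrics",
    {"redis_connected_clients": 3},
)
server.shutdown()
print(status, received)  # → 200 [{'redis_connected_clients': 3}]
```

The trade-off the comment alludes to: in the pull model an exporter must stay resident so the collector can scrape it, while a push job like this can run on a schedule and exit.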
Sure, let's move to standalone exporter instances as part of the monitoring chart for now.
The queue job seems like quite the task; we can assess whether people are against the exporter images and decide from there.
Yeah, fair. Let's do that.
First part of self-hosted observability v1.
Matching docs PR: langchain-ai/helm#353