
🐛 Bug Report: Requirements for Langchain example #1096

Open · 1 task done
damianoneill opened this issue May 20, 2024 · 7 comments

@damianoneill commented May 20, 2024

Which component is this bug for?

Langchain Instrumentation

📜 Description

When using the default pyproject.toml generated by

langchain app new ...

Langchain instrumentation does not occur.

👟 Reproduction steps

```shell
langchain app new chat
```

Results in the following pyproject.toml:

```toml
[tool.poetry]
name = "chat"
version = "0.1.0"
description = ""
authors = ["Your Name <you@example.com>"]
readme = "README.md"
packages = [
    { include = "app" },
]

[tool.poetry.dependencies]
python = "^3.11"
uvicorn = "^0.23.2"
langserve = {extras = ["server"], version = ">=0.0.30"}
pydantic = "<2"

[tool.poetry.group.dev.dependencies]
langchain-cli = ">=0.0.15"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```

👍 Expected behavior

The docs should describe what else needs to be added to get full instrumentation for a Langchain example.

👎 Actual Behavior with Screenshots

This results in only the LLM being traced.

[screenshot: trace showing only the LLM span]

馃 Python Version

3.11

📃 Provide any additional context for the Bug.

It looks like the following modules need to be added to get the trace below:

  • langchain
  • opentelemetry-instrumentation-fastapi

[screenshot: trace including the FastAPI and Langchain spans]

In the langchain example described in #1043, langchain is explicitly added, and opentelemetry-instrumentation-fastapi is pulled in as a transitive dependency of chromadb.
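Under that assumption, a minimal sketch of the extra dependencies in the generated pyproject.toml might look like this (the version constraints here are illustrative, not tested pins):

```toml
[tool.poetry.dependencies]
python = "^3.11"
uvicorn = "^0.23.2"
langserve = {extras = ["server"], version = ">=0.0.30"}
pydantic = "<2"
# assumed additions so the full chain (not just the LLM) is traced
langchain = "*"
opentelemetry-instrumentation-fastapi = "*"
```

Even with these installed, the FastAPI instrumentor still has to be invoked in application code, as the snippet later in this thread shows.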

👀 Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find a similar issue

Are you willing to submit PR?

None

@damianoneill (Author)

@nirga, let me know if you need anything else.

Also should I see the fastapi endpoint /invoke in the trace?

Thanks,
Damian.

@nirga (Member)

nirga commented May 20, 2024

Thanks @damianoneill! Will try to reproduce this.

For FastAPI: you'll need to add the FastAPI instrumentation. Are you using our SDK?

@damianoneill (Author)

Morning @nirga, I'm not sure what you mean about the SDK. Is there something other than the below that I should be doing?

```python
import logging

from traceloop.sdk import Traceloop

logger = logging.getLogger(__name__)

try:
    Traceloop.init(
        app_name="Langchain Chatbot Application",
        api_endpoint="http://localhost:4318",  # HTTP endpoint for the OpenTelemetry (Jaeger) collector
        disable_batch=True,
    )
except Exception as e:  # pylint: disable=broad-except
    logger.error("Failed to initialize Traceloop: %s", e)
```

@nirga (Member)

nirga commented May 21, 2024

No, I meant that we don't instrument FastAPI currently, so you should do it yourself after initializing Traceloop. This is really easy:

```python
import logging

import fastapi
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
from traceloop.sdk import Traceloop

logger = logging.getLogger(__name__)

app = fastapi.FastAPI()

try:
    Traceloop.init(
        app_name="Langchain Chatbot Application",
        api_endpoint="http://localhost:4318",  # HTTP endpoint for the OpenTelemetry (Jaeger) collector
        disable_batch=True,
    )
    FastAPIInstrumentor.instrument_app(app)
except Exception as e:  # pylint: disable=broad-except
    logger.error("Failed to initialize Traceloop: %s", e)

@app.get("/foobar")
async def foobar():
    return {"message": "hello world"}
```

@asaf
asaf commented May 21, 2024

Hey @nirga ,

I started a local Docker instance of jaegertracing/all-in-one:1.57 to check out OpenLLMetry locally, and I get `Failed to export batch code: 404, reason: 404 page not found` with the configuration suggested above.

I init Traceloop as you mentioned above:

```python
Traceloop.init(
    app_name="Langchain Chatbot Application",
    api_endpoint="http://localhost:4318",
    disable_batch=True,
)
```

I also tried `api_endpoint="http://localhost:4318/v1/traces"`, with the same result.
(posting directly to Jaeger works)

I wonder if there's a way to debug Traceloop, as I'm not sure it's sending the correct HTTP call.

Best,
Asaf.
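[Editor's note on the 404 above: OTLP/HTTP trace exporters conventionally POST to a fixed /v1/traces path appended to the configured base endpoint. A tiny stdlib sketch makes that explicit; the helper name `otlp_traces_url` is hypothetical, not part of any SDK.]

```python
def otlp_traces_url(base: str) -> str:
    """Return the URL an OTLP/HTTP trace exporter would POST to.

    OTLP over HTTP uses the fixed signal path ``/v1/traces`` appended
    to the configured base endpoint, so a trailing slash on the base
    must not produce a double slash.
    """
    return base.rstrip("/") + "/v1/traces"

print(otlp_traces_url("http://localhost:4318"))  # http://localhost:4318/v1/traces
```

Probing that URL directly (e.g. `curl -X POST http://localhost:4318/v1/traces`) can help distinguish a collector that isn't serving OTLP/HTTP at all — Jaeger's all-in-one image gates this behind `COLLECTOR_OTLP_ENABLED` in some versions — from an SDK-side problem.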

@nirga (Member)

nirga commented May 22, 2024

Hey @asaf, I'm going to delete this comment to avoid clutter in this issue, as this is not the place to report new unrelated issues. I would love to assist you: you're welcome to send a message on Slack, or open a separate issue / discussion here on GitHub.

Thanks!

@asaf
asaf commented May 22, 2024

Thanks @nirga. Since this is not a feature/bug but more of a question, I'll convert this into a discussion. Thanks.
