bug: python langchain openai, "Usage object must have either {input, output, total, unit} or {promptTokens, completionTokens, totalTokens}" #1956

Closed
marcklingen opened this issue May 2, 2024 Discussed in #1940 · 1 comment

Discussed in https://github.com/orgs/langfuse/discussions/1940

Originally posted by tzilkha May 1, 2024
I am running something very straightforward, but the chat completion information does not show up on Langfuse when I run invoke/ainvoke/run etc.; it only appears if I stream. I don't think this should be the case. I would appreciate any insight into what I am doing wrong. Reproducible code is below.

Environment:

langchain==0.1.6
langchain-community==0.0.19
langchain-core==0.1.22
langchain-experimental==0.0.50
langchain-openai==0.0.5
langchain-text-splitters==0.0.1
langchainhub==0.1.14
langfuse==2.26.3
langgraph==0.0.23
langsmith==0.0.87
openai==0.27.10

Here is the code I am running:

import os

from langchain_community.chat_models import ChatOpenAI
from langchain_core.prompts.chat import (
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
    PromptTemplate,
    ChatPromptTemplate,
    HumanMessagePromptTemplate
)

from langfuse.callback import CallbackHandler

# OPENAI_API_KEY was referenced below without being defined; read it from the environment
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]

# the handler is required by the .with_config call below
langfuse_handler = CallbackHandler(
    public_key='pk-lf-764d0c59-6f96-4a83-9800-f38729918fb3',
    secret_key='sk-lf-3dd08db6-6bfb-4059-9a44-1c3d91ddc2ca',
    host='http://localhost:3200'
)

sum_prompt = ChatPromptTemplate(
    input_variables=['summaries', 'file_type', 'file_name'], 
    messages=[
        SystemMessagePromptTemplate(
            input_variables=['summaries', 'file_type', 'file_name'], 
            prompt=PromptTemplate(
                input_variables=['summaries', 'file_type', 'file_name'],
                template='''
Provide an overall summary given a list of section summaries, of a {file_type} file with the name {file_name}.
- Only summarize content which is explicitly mentioned in the summaries below.
- Do not speculate what other content the sections may contain.

SUMMARIES:
{summaries}

OVERALL SUMMARY
'''
            )
        )
    ]
)
    
sum_llm = ChatOpenAI(model='gpt-4-turbo-preview', openai_api_key=OPENAI_API_KEY) 
sum_summarizer = sum_prompt | sum_llm


x = {
    'summaries': 'I love candy very much, expecially MNMs',
    'file_type': 'pdf',
    'file_name': 'candy.pdf'
}
# ainvoke returns a coroutine; this top-level await assumes an async context
# (e.g. a notebook), otherwise wrap the call in asyncio.run(...)
y = await sum_summarizer.with_config({'callbacks': [langfuse_handler]}).ainvoke(x)

print(y)

The code runs and I get the output, but the following error is also raised:

  File "/Users/tzilkha/miniforge3/lib/python3.11/site-packages/langfuse/utils/__init__.py", line 101, in _convert_usage_input
    raise ValueError(
ValueError: Usage object must have either {input, output, total, unit} or {promptTokens, completionTokens, totalTokens}
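
For reference, these are the two usage shapes the error message says are accepted (a minimal sketch; the numeric values and the "TOKENS" unit string are illustrative assumptions, not taken from this trace):

# Minimal sketch of the two accepted usage shapes; the field names come from the
# error message, the numbers and the "TOKENS" unit value are illustrative.
usage_generic = {"input": 25, "output": 120, "total": 145, "unit": "TOKENS"}
usage_openai_style = {"promptTokens": 25, "completionTokens": 120, "totalTokens": 145}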

On Langfuse:
[screenshot: trace view]
We can see that the token information and the output from the LLM are not being traced.

Where am I going wrong?

hassiebp (Contributor) commented May 3, 2024

Hi @tzilkha - thanks a lot for your report! Unfortunately I cannot reproduce the issue even when installing the mentioned versions of langchain and langfuse.

  • I noticed you are on quite old langchain versions. Does the issue persist once you upgrade?
  • If that doesn't resolve it, could you please log the usage object that is passed to _convert_usage_input for your installed langfuse version? The path is shown in the traceback above (one way to do this is sketched below).
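
One way to capture that value, sketched under the assumption that callers resolve _convert_usage_input through langfuse.utils at call time (if the name is imported directly elsewhere, adding a temporary print inside the file shown in the traceback is the simpler route):

# Hedged debugging sketch: wrap langfuse.utils._convert_usage_input so the raw
# usage object is printed before validation. This only takes effect if callers
# look the function up on the module at call time; otherwise add a temporary
# print() at the top of _convert_usage_input in the file from the traceback.
import functools
import langfuse.utils as lf_utils

_original_convert = lf_utils._convert_usage_input

@functools.wraps(_original_convert)
def _logged_convert(usage, *args, **kwargs):
    print("usage passed to _convert_usage_input:", repr(usage))
    return _original_convert(usage, *args, **kwargs)

lf_utils._convert_usage_input = _logged_convert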

Thanks for your support!

hassiebp closed this as not planned (won't fix, can't repro, duplicate, stale) on May 8, 2024