Originally posted by tzilkha May 1, 2024
I am running something very straightforward, yet the chat completion information does not appear in Langfuse when I use invoke/ainvoke/run etc.; it only appears when I stream. I don't think this should be the case, and I would appreciate any insight into what I am doing wrong. Reproducible code is below.
from langchain_community.chat_models import ChatOpenAI
from langchain_core.prompts.chat import (
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
    PromptTemplate,
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
)
from langfuse.callback import CallbackHandler

langfuse_handler = CallbackHandler(
    public_key='pk-lf-764d0c59-6f96-4a83-9800-f38729918fb3',
    secret_key='sk-lf-3dd08db6-6bfb-4059-9a44-1c3d91ddc2ca',
    host='http://localhost:3200',
)
sum_prompt = ChatPromptTemplate(
    input_variables=['summaries', 'file_type', 'file_name'],
    messages=[
        SystemMessagePromptTemplate(
            input_variables=['summaries', 'file_type', 'file_name'],
            prompt=PromptTemplate(
                input_variables=['summaries', 'file_type', 'file_name'],
                template='''
Provide an overall summary given a list of section summaries, of a {file_type} file with the name {file_name}.
- Only summarize content which is explicitly mentioned in the summaries below.
- Do not speculate what other content the sections may contain.

SUMMARIES:
{summaries}

OVERALL SUMMARY:
'''
            )
        )
    ]
)
import os

# Assumes OPENAI_API_KEY is set in the environment.
sum_llm = ChatOpenAI(model='gpt-4-turbo-preview',
                     openai_api_key=os.environ['OPENAI_API_KEY'])
sum_summarizer = sum_prompt | sum_llm

x = {
    'summaries': 'I love candy very much, especially MNMs',
    'file_type': 'pdf',
    'file_name': 'candy.pdf',
}

y = await sum_summarizer.with_config({'callbacks': [langfuse_handler]}).ainvoke(x)
print(y)
The code runs and I get the output, but I also get the following warning:
File "/Users/tzilkha/miniforge3/lib/python3.11/site-packages/langfuse/utils/__init__.py", line 101, in _convert_usage_input
raise ValueError(
ValueError: Usage object must have either {input, output, total, unit} or {promptTokens, completionTokens, totalTokens}
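For context, the error says that the usage payload reported for the generation matched neither of the two key sets Langfuse accepts. A rough, hypothetical sketch of the kind of check implied by the message (this is not the actual langfuse source code):

```python
# Hypothetical validator illustrating the two accepted usage shapes
# named in the ValueError. The real langfuse check may differ.

def has_required_usage_keys(usage: dict) -> bool:
    generic = {"input", "output", "total", "unit"}
    openai_style = {"promptTokens", "completionTokens", "totalTokens"}
    keys = set(usage)
    # Accept a usage dict that covers at least one full key set.
    return keys >= generic or keys >= openai_style

# An OpenAI-style chat completion typically reports usage like this:
print(has_required_usage_keys(
    {"promptTokens": 120, "completionTokens": 45, "totalTokens": 165}))
print(has_required_usage_keys({"tokens": 165}))
```

If the usage dict that reaches langfuse has some other shape (for example, raw `prompt_tokens`/`completion_tokens` keys that were not mapped), a check along these lines would raise the ValueError shown above.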
In Langfuse, we can see that the token information and the output from the LLM are not being traced. [Screenshot: Langfuse trace view showing the generation with missing token counts and output.]
Where am I going wrong?
Hi @tzilkha - thanks a lot for your report! Unfortunately, I cannot reproduce the issue, even when installing the mentioned versions of langchain and langfuse.
I noticed you are on quite old langchain versions. Does the issue persist once you upgrade?
If not, could you please log the usage object that is passed to _convert_usage_input for your installed langfuse version? The path is shown in the error message.
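One way to capture that argument without editing the installed package is to wrap the function so it prints whatever it receives. A minimal sketch of the pattern, demonstrated on a stand-in function (with langfuse installed you would apply the same wrapper to `langfuse.utils._convert_usage_input`):

```python
# Generic debugging sketch: wrap a function to print its arguments
# before calling through. Shown on a stand-in function, not langfuse itself.
import functools

def log_args(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"{fn.__name__} called with args={args} kwargs={kwargs}")
        return fn(*args, **kwargs)
    return wrapper

# Stand-in for _convert_usage_input; the real one lives in langfuse.utils.
def convert_usage_input(usage):
    return usage

convert_usage_input = log_args(convert_usage_input)
result = convert_usage_input({"totalTokens": 165})
```

Running the failing chain with the wrapped function in place would show exactly which usage dict the LLM response carries, which should pinpoint why the key check fails.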
Discussed in https://github.com/orgs/langfuse/discussions/1940