
Token Usage is wrong. Completion Token counted twice #645

Closed
niklasfink opened this issue Aug 31, 2023 · 1 comment

Comments

@niklasfink
Contributor

I recently tested LangChain's token-usage tracking, which unfortunately doesn't work with streaming models yet. While testing, I noticed that the hand-coded calculation in gpt-engineer probably has an error around here:

https://github.com/AntonOsika/gpt-engineer/blob/eebbe1bdcb10e6a572cedc67b3f7b8cad1973e54/gpt_engineer/ai.py#L174

Due to this line, the following calculation error occurs: the assistant's answer has already been appended to `messages` by the time the usage log is updated, so the completion tokens are counted on the prompt side as well. Example:
Prompt token usage: 1000
Completion token usage: 500

gpt-engineer would report 1500 prompt tokens, 500 completion tokens, and 2000 total, counting the 500 completion tokens twice.
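The arithmetic above can be reproduced with a minimal sketch. This is not gpt-engineer's actual implementation (the real code uses a proper tokenizer); `count_tokens` and `log_usage` are hypothetical stand-ins, with one token per whitespace-separated word, chosen only to show how the append-before-log ordering inflates the prompt side:

```python
def count_tokens(text):
    # toy tokenizer: one token per whitespace-separated word
    return len(text.split())

def log_usage(messages, answer):
    # prompt side is everything currently in `messages`;
    # completion side is the answer text
    prompt_tokens = sum(count_tokens(m) for m in messages)
    completion_tokens = count_tokens(answer)
    return prompt_tokens, completion_tokens, prompt_tokens + completion_tokens

messages = [" ".join(["tok"] * 1000)]   # prompt: 1000 tokens
answer = " ".join(["tok"] * 500)        # completion: 500 tokens

# Buggy order: append the answer first, then log usage.
# The answer's tokens now also appear on the prompt side.
p, c, t = log_usage(messages + [answer], answer)
print(p, c, t)  # 1500 500 2000  (completion counted twice)

# Fixed order: log usage first, then append the answer.
p, c, t = log_usage(messages, answer)
print(p, c, t)  # 1000 500 1500
```

With the corrected order, the totals match the real usage figures from the example.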

@niklasfink
Contributor Author

response = self.llm(messages, callbacks=callsbacks)  # type: ignore

self.update_token_usage_log(messages=messages, answer=response.content, step_name=step_name)

messages.append(response)
logger.debug(f"Chat completion finished: {messages}")

return messages

This would fix it: update_token_usage_log is called before the response is appended to messages. The calculation is then only off by -1 on the gpt-engineer prompt side.
