Token Usage Tracking #85
Merged
Conversation
teoh commented Dec 10, 2022
jerryjliu reviewed Dec 12, 2022
thanks for doing this! a few comments/questions
This was referenced Dec 19, 2022
viveksilimkhan1 pushed a commit to viveksilimkhan1/llama_index that referenced this pull request on Oct 30, 2023: update default recommended openai model from `text-davinci-003` to `gpt-3.5-turbo` (run-llama#85); fix unintended update in models list in README
What is this?
From #56. This PR adds support for counting the tokens used during calls to the LLM. This is done via the `llm_token_counter()` decorator that lives in `gpt_index/utils.py`. At the moment, this decorator can only be used on class instance methods with a `_llm_predictor` attribute.
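The example code from the original post did not survive extraction. As a rough illustration only, here is a minimal sketch of how such a decorator could work; the class names, the fake predictor, and the exact print format below are all assumptions, not the actual gpt_index implementation:

```python
import functools


def llm_token_counter(method_name):
    # Sketch of a token-counting decorator, as described in the PR text.
    # Assumes the instance keeps a running `_total_tokens_used` counter
    # that its `_llm_predictor` updates on each LLM call.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(self, *args, **kwargs):
            start = self._total_tokens_used
            result = fn(self, *args, **kwargs)
            used = self._total_tokens_used - start
            print(f"> [{method_name}] Total LLM token usage: {used} tokens")
            return result
        return wrapper
    return decorator


class FakeIndex:
    # Hypothetical stand-in for an index class with a `_llm_predictor`.
    # The "predictor" here just bumps the counter by a whitespace token
    # count instead of calling a real LLM.
    def __init__(self):
        self._total_tokens_used = 0

    @llm_token_counter("build_from_text")
    def build_from_text(self, text):
        self._total_tokens_used += len(text.split())  # fake token count
        return "index"
```

The point of the decorator pattern is that any instance method wrapped this way reports the delta in `_total_tokens_used` across the call, without each method having to implement its own bookkeeping.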
If you run `build_from_text()`, it will print the output in the form below.

Why do we need this?
Calls to LLMs such as GPT-3 cost money. For example, per OpenAI's pricing, the Davinci endpoint costs $0.02 per 1,000 tokens.
Since gpt_index makes multiple LLM calls when building the index, it's handy to know how many tokens we're going through.
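To make the arithmetic concrete, a one-line estimator using the Davinci price quoted above (the token counts passed in are made-up examples):

```python
def davinci_cost(tokens: int, usd_per_1k: float = 0.02) -> float:
    # Cost in USD at $0.02 per 1,000 tokens (Davinci pricing quoted above).
    return tokens / 1000 * usd_per_1k


print(davinci_cost(25_000))  # 25k tokens -> $0.50
```

So an index build that burns through a few tens of thousands of tokens already costs a visible fraction of a dollar, which is why surfacing the count is useful.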
Remaining TODOs for this PR
_total_tokens_used
instance attribute to do its thingOther comments
Other implementations I considered
We might also miss token counts if the LLM is called somewhere that we aren't surrounding with token_start and end.
For the future
Known issues:
Sometimes the token count is off by a few. See this issue for an example: openai/openai-python#150