Adding support for --token-usage/-t flag #8
Merged
Fixes #7
Thank you for making the process easier by adding the code for parsing the `--token-usage`/`-t` option flag argument from the command line in `codeMage.py`!

## Implementation
I started implementing the feature by adding an if-statement to `translator.py` that checks whether the option flag was provided. At first, if the condition was met, my code simply printed the token information (extracted from the `completion` object) to `stdout`; however, after the first round of testing, I noticed that the token amounts were always equal to 0. After printing out the `completion` object, I realized that completion token details are not being returned by the chosen model (`completion_token_details=None`). That is why I added another check to see whether completion token details are present in the response. Now, if details about the token usage are present, they get printed to `stderr`; otherwise, users see a message saying that the model didn't provide any token usage details.
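For reference, here is a minimal sketch of what that check could look like. It assumes an OpenAI-style `completion` response object; the attribute names are taken from the output observed above plus the standard usage fields, and may differ from the actual code in `translator.py`.

```python
import sys


def report_token_usage(completion):
    """Sketch of the --token-usage/-t handling described above (illustrative only)."""
    usage = getattr(completion, "usage", None)
    # Some models leave the detailed breakdown empty (completion_token_details=None
    # in the output observed above), so verify it is present before printing.
    details = getattr(usage, "completion_token_details", None) if usage else None

    if details is not None:
        # Print to stderr so the translated output on stdout stays clean.
        print(f"Prompt tokens: {usage.prompt_tokens}", file=sys.stderr)
        print(f"Completion tokens: {usage.completion_tokens}", file=sys.stderr)
        print(f"Total tokens: {usage.total_tokens}", file=sys.stderr)
    else:
        print("The model didn't provide any token usage details.", file=sys.stderr)
```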
## Examples

Running the command with a model that provides token usage details (`openai/gpt-3.5-turbo`):

Running the command with a model that doesn't:
To take advantage of the token usage feature, you could consider allowing users to specify the model of their choice, or switching to a model that returns token usage details in its responses.
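If the first route is taken, a minimal sketch of a model option could look like the following, assuming `codeMage.py` uses `argparse`; the `--model` flag and its default are hypothetical and not part of this PR:

```python
import argparse

parser = argparse.ArgumentParser(prog="codeMage")
# Hypothetical option, not part of this PR; the default is just an example
# of a model that returns token usage details.
parser.add_argument(
    "-m", "--model",
    default="openai/gpt-3.5-turbo",
    help="Model to use for translation (pick one that reports token usage)",
)
args = parser.parse_args()
```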
## Changes
Apart from adding the code block with the if-statements, I've imported `sys` at the top of `translator.py` for printing to `stderr`, added info about the new option at the bottom of the README.md, and updated the help info for the flag option in `codeMage.py`.
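For completeness, the updated flag definition with its help text could look roughly like this, again assuming `codeMage.py` parses arguments with `argparse`; the actual help wording in the repo may differ:

```python
import argparse

parser = argparse.ArgumentParser(prog="codeMage")
parser.add_argument(
    "-t", "--token-usage",
    action="store_true",
    # Illustrative help text; the actual wording in codeMage.py may differ.
    help="Print token usage details for the request to stderr",
)
args = parser.parse_args()
```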