
Adding support for --token-usage/-t flag #8


Merged: 7 commits into gitdevjin:main on Sep 20, 2024

Conversation

arilloid (Contributor)

Fixes #7

Thank you for making the process easier by adding the code that parses the --token-usage/-t option flag from the command line in codeMage.py!

Implementation

I started implementing the feature by adding an if-statement to translator.py that checks whether the option flag was provided. At first, when the condition was met, my code simply printed the token information (extracted from the completion object) to stdout; however, after the first round of testing, I noticed that the token amounts were always equal to 0.
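In rough outline, that first pass looked something like the sketch below (not the exact code from this PR: the usage attribute names assume an OpenAI-style completion object, and the boolean parameter stands in for however the parsed -t/--token-usage flag is passed into translator.py):

```python
def report_token_usage(token_usage: bool, completion) -> None:
    # Sketch of the initial approach: if the flag was passed, print the token
    # counts from the completion object to stdout.
    # Attribute names assume an OpenAI-style ChatCompletion response.
    if token_usage:
        usage = completion.usage
        print(f"Prompt tokens: {usage.prompt_tokens}")
        print(f"Completion tokens: {usage.completion_tokens}")
        print(f"Total tokens: {usage.total_tokens}")
```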


After printing out the completion object, I realized that completion token details are not returned by the chosen model (completion_token_details=None).


That is why I added another check to see whether token usage details are present in the response. Now, if the details are present, they get printed to stderr; otherwise, users see a message saying that the model didn't return any token usage details.
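The guarded version looks roughly like the following sketch (again assuming an OpenAI-style response; the presence check here is only an approximation of the actual check on the completion token details, and at this stage the messages were still emitted with print(), which the review comment further down addresses):

```python
import sys

def report_token_usage(token_usage: bool, completion) -> None:
    # Sketch of the guarded version: only report usage when the model actually
    # returned it, and write the report to stderr instead of stdout.
    if not token_usage:
        return
    usage = getattr(completion, "usage", None)
    if usage and usage.total_tokens:
        print(f"Prompt tokens: {usage.prompt_tokens}", file=sys.stderr)
        print(f"Completion tokens: {usage.completion_tokens}", file=sys.stderr)
        print(f"Total tokens: {usage.total_tokens}", file=sys.stderr)
    else:
        print("The model did not return any token usage details.", file=sys.stderr)
```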


Examples

Running the command with a model that provides token usage details (openai/gpt-3.5-turbo) prints the token usage to stderr.

Running the command with a model that doesn't return them shows the fallback message instead.

To take full advantage of the token usage feature, you could consider allowing users to specify a model of their choice, or switching to a model that returns token usage details in its responses.

Changes

Apart from adding the code block with the if-statements, I've imported sys at the top of translator.py for printing to stderr, added info about the new option at the bottom of README.md, and updated the help text for the flag option in codeMage.py.
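For context, the flag definition that the help text belongs to would look roughly like this (a sketch that assumes codeMage.py builds its CLI with argparse; the -t/--token-usage names come from the existing code, but the help wording here is only illustrative):

```python
import argparse

# Sketch only: assumes an argparse-based CLI. The flag names come from the
# repository; the help string is illustrative, not the exact text in the PR.
parser = argparse.ArgumentParser(prog="codeMage")
parser.add_argument(
    "-t", "--token-usage",
    action="store_true",
    help="report the number of tokens used by the LLM response (written to stderr)",
)
args = parser.parse_args()
```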

@gitdevjin (Owner)

Instead of using print(), I think you need to use sys.stderr.write("This is an stderr")

@arilloid (Contributor, Author)

I have changed all of the print() statements to sys.stderr.write().
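Concretely, the change amounts to something like this (messages and values are illustrative, not the exact lines from translator.py):

```python
import sys

# Before: print() with the output stream redirected to stderr.
# print(f"Total tokens: {total_tokens}", file=sys.stderr)

# After: writing to stderr directly. sys.stderr.write() takes a single string
# and does not add a newline, so one is appended explicitly.
total_tokens = 42  # illustrative value
sys.stderr.write(f"Total tokens: {total_tokens}\n")
```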


gitdevjin merged commit 54cb2f7 into gitdevjin:main on Sep 20, 2024.
@gitdevjin (Owner)

Thank you so much for your contribution. I merged it into my main branch. It looks really good.
And yes, I checked that my LLM API for some reason doesn't support token usage.
I will probably implement support for other models.

gitdevjin self-requested a review on October 4, 2024 at 03:58.