
Conversation


@shreyashkgupta shreyashkgupta commented May 29, 2024

Pull Request Description: 153 work summarizer token counter fix

Summary

This pull request fixes a bug in the token counter functionality in the kaizen/llms/provider.py file. The main purpose of these changes is to ensure that the litellm.token_counter function receives the correct parameters, so that tokens are counted correctly.

Significant Modifications

  1. Bug Fix in Token Counter: The available_tokens method in kaizen/llms/provider.py has been updated to pass the parameters to litellm.token_counter using keyword arguments (model=self.model, text=message) instead of positional arguments. This change ensures that the function receives the correct inputs, potentially fixing any issues related to incorrect token counting.

  2. Version Bump: The project version in pyproject.toml has been incremented from 0.1.13 to 0.1.14. This is a patch release, reflecting the bug fix.
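The keyword-argument fix in point 1 can be illustrated with a minimal sketch. The stub below is hypothetical: it only mirrors the keyword interface described in the PR (`model=...`, `text=...`) and is not litellm's actual implementation; the extra `custom_tokenizer` parameter is an assumption used to show how a positional call can bind an argument to the wrong slot.

```python
# Hypothetical stand-in for litellm.token_counter, used only to illustrate
# why keyword arguments matter when a function has several optional parameters.
def token_counter(model="", custom_tokenizer=None, text=None, messages=None):
    """Stub: count whitespace-separated tokens in `text`."""
    if text is None:
        raise ValueError("no text provided")
    return len(text.split())

message = "hello world from kaizen"

# Positional call: if the second positional slot is not `text`, the message
# binds to the wrong parameter and counting fails -- the kind of bug this
# PR guards against:
#   token_counter("gpt-3.5-turbo", message)  # message -> custom_tokenizer

# Keyword call, matching the fixed available_tokens:
count = token_counter(model="gpt-3.5-turbo", text=message)
print(count)  # 4
```

With keyword arguments the call site is self-documenting and immune to parameter reordering in future litellm releases.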

Impact

  • Error Handling: The change improves the reliability of the token counting mechanism, which is crucial for functionalities that depend on accurate token limits.
  • Maintainability: Using keyword arguments enhances code readability and reduces the risk of errors related to parameter ordering.
  • Versioning: The version increment helps in tracking the update and ensuring that users can identify the bug fix release.

Please review the changes and provide feedback or approval as necessary.

✨ Generated with love by Kaizen ❤️

Original Description: None

@shreyashkgupta shreyashkgupta linked an issue May 29, 2024 that may be closed by this pull request
@cloudcodeai-nightly

Code Review

✅ This is a good review! 👍

Here is some feedback:

Potential Issues

[important] -> The `available_tokens` method does not handle exceptions that might be raised by `litellm.get_max_tokens` or `litellm.token_counter`. This could lead to unhandled exceptions and crashes. **Fix:** Add error handling for `litellm.get_max_tokens` and `litellm.token_counter`, for example with try-except blocks that catch and handle specific exceptions. (kaizen/llms/provider.py, lines 72-75)
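The suggestion above can be sketched as follows. This is a minimal illustration, not the actual kaizen code: `get_max_tokens` and `token_counter` are hypothetical stubs standing in for the litellm calls, and the `available_tokens` signature is assumed for the example.

```python
# Stub for litellm.get_max_tokens: raises on unknown models (hypothetical data).
def get_max_tokens(model):
    limits = {"gpt-3.5-turbo": 4096}
    if model not in limits:
        raise ValueError(f"unknown model: {model}")
    return limits[model]

# Stub for litellm.token_counter, keyword-only usage as in the PR.
def token_counter(model="", text=None):
    if text is None:
        raise ValueError("no text provided")
    return len(text.split())

class LLMProvider:
    def __init__(self, model):
        self.model = model

    def available_tokens(self, message):
        # Wrap the external calls so a model-lookup or tokenizer failure
        # degrades gracefully instead of crashing the caller.
        try:
            max_tokens = get_max_tokens(self.model)
            used = token_counter(model=self.model, text=message)
        except Exception as exc:
            # In real code, catch the specific litellm exception types here.
            print(f"token counting failed: {exc}")
            return None
        return max_tokens - used
```

Returning `None` on failure is one possible design; raising a domain-specific exception after logging would work equally well, depending on how callers use the budget.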

✨ Generated with love by Kaizen ❤️

1 similar comment

@shreyashkgupta shreyashkgupta merged commit 373635c into main May 29, 2024


Development

Successfully merging this pull request may close these issues.

Work summarizer token counter fix
