Conversation

@edenreich (Collaborator)

Summary

This feature allows limiting the number of tokens the LLM generates per request, making it more efficient for quick tasks where you want the LLM to "think" less.
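
As a rough illustration, here is a minimal sketch of how a capped request might look from the client side. The names below (`InferenceGatewayClient`, `generate_content_body`, the `/v1/generate` endpoint, and the JSON field layout other than `max_tokens`) are hypothetical placeholders, not the SDK's actual API:

```rust
// Hypothetical sketch: struct, method, and endpoint names are illustrative only.
struct InferenceGatewayClient {
    base_url: String,
}

impl InferenceGatewayClient {
    /// Build the JSON body for a generate_content request.
    /// `None` omits the field so the provider's default limit applies;
    /// `Some(n)` caps generation at n tokens.
    fn generate_content_body(&self, model: &str, prompt: &str, max_tokens: Option<u32>) -> String {
        match max_tokens {
            Some(n) => format!(
                r#"{{"model":"{}","prompt":"{}","max_tokens":{}}}"#,
                model, prompt, n
            ),
            None => format!(r#"{{"model":"{}","prompt":"{}"}}"#, model, prompt),
        }
    }
}

fn main() {
    let client = InferenceGatewayClient {
        base_url: "http://localhost:8080".to_string(),
    };
    // Cap the response at 50 tokens for a quick task.
    let body = client.generate_content_body("gpt-4o", "Summarize this in one line.", Some(50));
    println!("POST {}/v1/generate -> {}", client.base_url, body);
}
```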

Signed-off-by: Eden Reich <eden.reich@gmail.com>
@edenreich changed the title from "docs: Add max_tokens parameter to OpenAPI specification" to "feat: Add max_tokens option to generate_content" on Feb 10, 2025
…yClient

Signed-off-by: Eden Reich <eden.reich@gmail.com>
@github-actions

🎉 This PR is included in version 0.9.0-rc.1 🎉

The release is available on GitHub release

Your semantic-release bot 📦🚀

@edenreich merged commit fc21cf2 into main on Feb 11, 2025
4 checks passed
github-actions bot pushed a commit that referenced this pull request Feb 11, 2025
## [0.9.0](0.8.0...0.9.0) (2025-02-11)

### ✨ Features

* Add max_tokens option to generate_content (#5) (fc21cf2)
@github-actions

🎉 This PR is included in version 0.9.0 🎉

The release is available on GitHub release

Your semantic-release bot 📦🚀

@edenreich deleted the feature/implement-max-tokens-limit-as-option-for-tokens-generations branch on February 11, 2025 at 11:41