Conversation
@wineee wineee commented Aug 6, 2025

  1. Added max_tokens field to all translator structs (OpenAI, Claude, Gemini, Grok, Qwen)
  2. Integrated max_tokens configuration from config file with default fallback (2048)
  3. Added max_tokens parameter to API requests for all translation services
  4. Replaced hardcoded max_tokens value in Qwen implementation with configurable value

This change allows better control over token usage and response length across all AI translation services. The configurable max_tokens parameter enables:

  • Cost optimization by limiting token consumption
  • Better response time management
  • Consistent behavior across different translation models
  • More flexible configuration based on use case requirements
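The listed changes boil down to one pattern: each translator struct gains a `max_tokens` field that is filled from the config file, falling back to 2048 when unset. A minimal sketch of that pattern (the struct and field names here are illustrative assumptions, not the PR's actual code):

```rust
/// Default used when the config file does not specify max_tokens.
const DEFAULT_MAX_TOKENS: u32 = 2048;

/// Subset of the configuration relevant to token limits (hypothetical shape).
struct TranslatorConfig {
    max_tokens: Option<u32>,
}

/// One of the translator structs; the same field is added to each provider.
struct ClaudeTranslator {
    max_tokens: u32,
}

impl ClaudeTranslator {
    fn new(config: &TranslatorConfig) -> Self {
        Self {
            // Config value wins; otherwise fall back to the 2048 default.
            max_tokens: config.max_tokens.unwrap_or(DEFAULT_MAX_TOKENS),
        }
    }
}

fn main() {
    let defaulted = ClaudeTranslator::new(&TranslatorConfig { max_tokens: None });
    let explicit = ClaudeTranslator::new(&TranslatorConfig { max_tokens: Some(512) });
    assert_eq!(defaulted.max_tokens, 2048);
    assert_eq!(explicit.max_tokens, 512);
    println!("{} {}", defaulted.max_tokens, explicit.max_tokens);
}
```

Keeping the fallback in one constant means all five providers behave identically when the option is omitted.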

@wineee wineee marked this pull request as ready for review August 8, 2025 19:31
@zccrs zccrs requested a review from Copilot August 11, 2025 02:59

Copilot AI left a comment


Pull Request Overview

This PR adds configurable max_tokens support across all AI translation services to enable better control over token usage and response management. The implementation removes hardcoded token limits and provides consistent configuration across different AI providers.

  • Adds max_tokens field to all translator structs (OpenAI, Claude, Gemini, Grok, Qwen)
  • Integrates configuration loading with 2048 token default fallback
  • Replaces hardcoded values with configurable parameters in API requests
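The third bullet, replacing hardcoded values in API requests, amounts to threading the stored field into the request body at call time. A hedged sketch of what such a request builder might look like (the function name, model string, and JSON layout are assumptions for illustration; the real payload differs per provider):

```rust
/// Hypothetical request-body builder: the configurable limit is passed in
/// instead of a literal embedded in the request string.
fn build_request_body(model: &str, text: &str, max_tokens: u32) -> String {
    format!(
        r#"{{"model":"{}","max_tokens":{},"messages":[{{"role":"user","content":"{}"}}]}}"#,
        model, max_tokens, text
    )
}

fn main() {
    // The value would come from the translator struct's max_tokens field.
    let body = build_request_body("example-model", "Hello", 2048);
    assert!(body.contains(r#""max_tokens":2048"#));
    println!("{}", body);
}
```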

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@zccrs zccrs merged commit f6b186f into zccrs:master Aug 11, 2025
10 checks passed
zccrs commented Aug 11, 2025

Thanks

@wineee wineee deleted the max branch August 11, 2025 05:59