
fix: Change the max_token of the Qianfan large model to max_output_token #2955


Merged: 1 commit merged into main on Apr 22, 2025

Conversation

shaohuzhang1 (Contributor)

fix: Change the max_token of the Qianfan large model to max_output_token


f2c-ci-robot bot commented Apr 22, 2025

Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected, please follow our release note process to remove it.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


f2c-ci-robot bot commented Apr 22, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

```diff
@@ -27,7 +27,7 @@ class WenxinLLMModelParams(BaseForm):
         _step=0.01,
         precision=2)
 
-    max_tokens = forms.SliderField(
+    max_output_tokens = forms.SliderField(
         TooltipLabel(_('Output the maximum Tokens'),
                      _('Specify the maximum number of tokens that the model can generate')),
         required=True, default_value=1024,
```
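For context, the sketch below shows how the renamed value might be forwarded when building a Qianfan request. This is a minimal illustration, not code from this PR: build_chat_params and the payload structure are assumptions; only the max_output_tokens parameter name comes from the change itself.

```python
# Hypothetical sketch (not from this PR): mapping form values onto a
# Qianfan-style request payload. Everything except the parameter name
# `max_output_tokens` is an illustrative assumption.
def build_chat_params(model_params: dict) -> dict:
    return {
        "temperature": model_params.get("temperature", 0.8),
        # The rename matters because the Qianfan/ERNIE chat API expects
        # `max_output_tokens` rather than `max_tokens`.
        "max_output_tokens": model_params.get("max_output_tokens", 1024),
    }


print(build_chat_params({"max_output_tokens": 2048}))
# {'temperature': 0.8, 'max_output_tokens': 2048}
```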
shaohuzhang1 (Contributor, Author) commented:

The proposed change from max_tokens to max_output_tokens is valid and aligns with naming conventions in many current LLM APIs, which include "output" in parameter names to make clear that the limit applies to what the model generates rather than what it receives. This makes sense for Large Language Model (LLM) parameters, since this field caps the number of tokens the model emits.

Potential Improvements:

  1. Documentation Clarity: Consider adding more detailed documentation for max_output_tokens and the related parameters _step and precision, describing their intended usage and expected values. This will help users understand how these parameters affect the behavior of the application.

  2. Validation: Ensure that the validation logic remains appropriate for these fields (see the sketch after this list):

    • For max_output_tokens, set a reasonable upper limit based on typical text-generation tasks.
    • Validate _step to ensure it meets expectations (e.g., a positive number).
    • Validate precision similarly, ensuring it stays within acceptable bounds.
  3. Consistent Naming: If there are other slider fields involving token limits (such as max_input_tokens), keep the naming consistent across all similar fields to maintain clarity and ease of use.
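As a concrete illustration of item 2, here is a hedged validation sketch; the function name, bounds, and error messages are all assumptions rather than part of this codebase:

```python
# Hypothetical validation sketch for the review suggestions above.
# The upper bounds (8192 tokens, 6 decimal places) are assumed values,
# not limits taken from the Qianfan API or this repository.
def validate_model_params(max_output_tokens: int, step: float, precision: int) -> None:
    if not 1 <= max_output_tokens <= 8192:
        raise ValueError("max_output_tokens must be between 1 and 8192")
    if step <= 0:
        raise ValueError("_step must be a positive number")
    if not 0 <= precision <= 6:
        raise ValueError("precision must be an integer between 0 and 6")


validate_model_params(max_output_tokens=1024, step=0.01, precision=2)  # passes
```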

Overall, this refactoring is straightforward and should work correctly, provided every existing reference to the old max_tokens field is updated to the new name.

shaohuzhang1 merged commit d6f1d25 into main on Apr 22, 2025
4 of 5 checks passed
shaohuzhang1 deleted the pr@main@fix_qianfan_model branch on April 22, 2025 at 10:10