Support passing max_tokens and max_completion_tokens #173

Merged

stephaniegiang merged 8 commits into actions:main from GitPaulo:main on Feb 24, 2026
Conversation

@GitPaulo (Contributor) commented Feb 13, 2026

Description

With the release of 2.0.6, the Mistral models (just two) were broken, as they seem to be the only ones that don't support max_completion_tokens.

To fix this, I've implemented a proper deprecation layer that lets users decide which input to pass, defaulting to the old max_tokens.

Issue: #172

Copilot AI review requested due to automatic review settings February 13, 2026 12:19
@GitPaulo GitPaulo requested a review from a team as a code owner February 13, 2026 12:19

Copilot AI left a comment

Pull request overview

This pull request implements support for both max_tokens and max_completion_tokens parameters to fix compatibility issues with Mistral models introduced in version 2.0.6. The change introduces a deprecation layer that allows users to choose which parameter to pass, with the new max_completion_tokens taking precedence when provided.

Changes:

  • Added maxCompletionTokens parameter alongside the deprecated maxTokens parameter across type definitions and function signatures
  • Implemented parameter precedence logic where maxCompletionTokens takes priority over maxTokens when both are provided
  • Added new max-completion-tokens action input with empty default, while marking max-tokens as deprecated (keeping its '200' default)
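
As a hedged sketch of the precedence rule described above (identifier names here are illustrative, not necessarily the PR's actual ones):

```typescript
// Illustrative sketch only: the new maxCompletionTokens wins whenever
// it is provided; otherwise the deprecated maxTokens applies.
// Function and parameter names are hypothetical.
function resolveTokenLimit(
  maxTokens?: number,
  maxCompletionTokens?: number
): number | undefined {
  return maxCompletionTokens ?? maxTokens
}
```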

Reviewed changes

Copilot reviewed 8 out of 10 changed files in this pull request and generated 3 comments.

Summary per file:

  • src/prompt.ts: Added maxCompletionTokens field to the ModelParameters interface and marked maxTokens as deprecated
  • src/main.ts: Implemented input parsing logic for both token parameters with precedence handling
  • src/inference.ts: Added a buildMaxTokensParam() helper function to route parameters correctly to the OpenAI API
  • src/helpers.ts: Updated the buildInferenceRequest() signature to accept both token parameters
  • action.yml: Added the max-completion-tokens input and marked max-tokens as deprecated
  • dist/index.js: Compiled JavaScript with all the above changes
  • tests/main.test.ts: Updated test expectations to include maxCompletionTokens: undefined
  • tests/inference.test.ts: Added basic test coverage for max_tokens parameter routing
  • tests/helpers-inference.test.ts: Updated test expectations to include maxCompletionTokens: undefined
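
The buildMaxTokensParam() routing mentioned for src/inference.ts might look roughly like this. This is a sketch under the assumption that the request should carry either the legacy max_tokens field or the newer max_completion_tokens field, never both; the actual helper in the PR may differ:

```typescript
// Hypothetical sketch of a buildMaxTokensParam-style helper: emit the
// newer max_completion_tokens request field when a value is available,
// otherwise fall back to the legacy max_tokens field.
type MaxTokensParam =
  | { max_completion_tokens: number }
  | { max_tokens: number }
  | Record<string, never>

function buildMaxTokensParam(
  maxTokens?: number,
  maxCompletionTokens?: number
): MaxTokensParam {
  if (maxCompletionTokens !== undefined) {
    return { max_completion_tokens: maxCompletionTokens }
  }
  if (maxTokens !== undefined) {
    return { max_tokens: maxTokens }
  }
  return {} // neither set: let the API apply its own default
}
```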


// Parse token limit inputs
const maxCompletionTokensInput =
  promptConfig?.modelParameters?.maxCompletionTokens ?? core.getInput('max-completion-tokens')
const maxCompletionTokens = maxCompletionTokensInput ? Number(maxCompletionTokensInput) : undefined
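
One detail worth noting about the guard in this snippet: core.getInput() returns an empty string for an unset action input, and Number('') coerces to 0, so the truthiness check is what keeps an unset input as undefined rather than a zero token limit. A minimal illustration:

```typescript
// Mirrors the guard above: an unset GitHub Actions input arrives as ''
// from core.getInput(), and Number('') would silently coerce it to 0.
function parseTokenInput(input: string): number | undefined {
  return input ? Number(input) : undefined
}
```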
@GitPaulo (Author) commented:

Note: I realized Number() works better here than parseInt() because it is stricter.
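
For context on the strictness difference: parseInt() stops at the first non-digit character and returns whatever it has parsed so far, while Number() returns NaN for any string that is not entirely numeric.

```typescript
// Number() rejects partially numeric strings; parseInt() accepts them.
const strict = Number('200 tokens')        // NaN
const lenient = parseInt('200 tokens', 10) // 200
```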

@stephaniegiang stephaniegiang merged commit e09e659 into actions:main Feb 24, 2026
6 checks passed