Support passing max_tokens and max_completion_tokens #173
Merged
stephaniegiang merged 8 commits into actions:main on Feb 24, 2026
Conversation
Contributor
Pull request overview
This pull request adds support for both the `max_tokens` and `max_completion_tokens` parameters, fixing the Mistral-model compatibility issue introduced in version 2.0.6. The change introduces a deprecation layer that lets users choose which parameter to pass, with the new `max_completion_tokens` taking precedence when both are provided.
Changes:
- Added a `maxCompletionTokens` parameter alongside the deprecated `maxTokens` parameter across type definitions and function signatures
- Implemented parameter precedence logic where `maxCompletionTokens` takes priority over `maxTokens` when both are provided
- Added a new `max-completion-tokens` action input with an empty default, while marking `max-tokens` as deprecated (keeping its `'200'` default)
Reviewed changes
Copilot reviewed 8 out of 10 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| src/prompt.ts | Added maxCompletionTokens field to ModelParameters interface and marked maxTokens as deprecated |
| src/main.ts | Implemented input parsing logic for both token parameters with precedence handling |
| src/inference.ts | Added buildMaxTokensParam() helper function to route parameters correctly to OpenAI API |
| src/helpers.ts | Updated buildInferenceRequest() signature to accept both token parameters |
| action.yml | Added max-completion-tokens input and marked max-tokens as deprecated |
| dist/index.js | Compiled JavaScript with all the above changes |
| tests/main.test.ts | Updated test expectations to include maxCompletionTokens: undefined |
| tests/inference.test.ts | Added basic test coverage for max_tokens parameter routing |
| tests/helpers-inference.test.ts | Updated test expectations to include maxCompletionTokens: undefined |
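The precedence routing described for `src/inference.ts` can be sketched as follows. This is a hypothetical reconstruction of the `buildMaxTokensParam()` helper based on the review summary — the actual signature and return shape in the repository may differ.

```typescript
// Hypothetical sketch: route whichever token limit is set to the matching
// OpenAI API field. max_completion_tokens wins when both are provided;
// otherwise fall back to the deprecated max_tokens.
function buildMaxTokensParam(
  maxTokens?: number,
  maxCompletionTokens?: number
): Record<string, number> {
  if (maxCompletionTokens !== undefined) {
    return { max_completion_tokens: maxCompletionTokens }
  }
  if (maxTokens !== undefined) {
    return { max_tokens: maxTokens }
  }
  // Neither input set: let the API apply its own default.
  return {}
}
```

Spreading the returned object into the request body keeps the two mutually exclusive fields from ever being sent together.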
GitPaulo
commented
Feb 13, 2026
```ts
// Parse token limit inputs
const maxCompletionTokensInput =
  promptConfig?.modelParameters?.maxCompletionTokens ?? core.getInput('max-completion-tokens')
const maxCompletionTokens = maxCompletionTokensInput ? Number(maxCompletionTokensInput) : undefined
```
Contributor
Author
Note: realized `Number()` works better here than `parseInt()` because it is stricter about malformed input.
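To illustrate the difference (a standalone sketch, not code from the PR): `Number()` rejects a string with trailing garbage outright, while `parseInt()` silently truncates it.

```typescript
// Number() requires the whole string to be numeric; parseInt() parses
// only the leading digits and ignores the rest.
const strict = Number('200abc')       // NaN
const loose = parseInt('200abc', 10)  // 200

console.log(Number.isNaN(strict)) // true
console.log(loose)                // 200
```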
stephaniegiang
approved these changes
Feb 24, 2026
Description
With the release of `2.0.6`, the Mistral models (just two) were broken, as they seem to be the only ones that don't support `max_completion_tokens`. To fix this, I've implemented a proper deprecation layer which allows users to decide which input to pass, defaulting to the old `max_tokens`.

Issue: #172