CodeRabbit
Consider making maxTokens configurable based on model capabilities
The hardcoded maxTokens: 4096 might not be suitable for all models. Some models support more, while others support less.
+ // Get model-specific max tokens or use a reasonable default
+ const modelMaxTokens = modelInstance.maxOutputTokens || 4096;
+ const maxTokens = Math.min(modelMaxTokens, 4096); // Cap at 4096 for code generation
+
const result = await AIService.generateCompletion({
model: modelInstance,
system: systemPrompt,
messages: [{ role: "user", content: userPrompt }],
temperature: 0.1,
- maxTokens: 4096,
+ maxTokens: maxTokens,
});
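The suggested diff above can be sketched as a small standalone helper. This is a hedged sketch, not the project's actual code: the `ModelInstance` shape and its `maxOutputTokens` field are assumptions, and the 4096 cap is kept from the original snippet.

```typescript
// Hypothetical model shape -- assumes the model instance may advertise
// its own output-token limit via an optional `maxOutputTokens` field.
interface ModelInstance {
  maxOutputTokens?: number;
}

// Cap retained from the original diff: 4096 for code generation.
const CODE_GEN_CAP = 4096;

function resolveMaxTokens(model: ModelInstance, cap: number = CODE_GEN_CAP): number {
  // Fall back to the cap when the model does not advertise a limit,
  // then never exceed either the model's limit or our cap.
  const modelMax = model.maxOutputTokens ?? cap;
  return Math.min(modelMax, cap);
}
```

A model that advertises 8192 output tokens would still be capped at 4096, while one limited to 2048 would get 2048, so the request never exceeds what the model supports.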
Settings... I might have to ask suno for something...