# feat(providers): add Mistral Chat API package and migrate capability to mixins (#83)
Kamilbenkirane merged 4 commits into `main`
## Conversation
Add standalone provider package for Mistral Chat API with mixin pattern for capability-agnostic reuse.

## Client (MistralChatClient mixin)
- HTTP POST/streaming to the `/v1/chat/completions` endpoint
- Usage parsing: `prompt_tokens`, `completion_tokens`, `total_tokens`
- Content extraction from `choices[0].message.content`
- Finish reason mapping (`stop`, `length`, etc.)
- Structured output support via `response_format`

## Parameters
- `TemperatureMapper`: `temperature` float [0.0-2.0]
- `MaxTokensMapper`: `max_tokens` integer
- `OutputSchemaMapper`: JSON schema via `response_format`
  - Supports single `BaseModel` and `list[BaseModel]`
  - Uses `StrictRefResolvingJsonSchemaGenerator` for schema generation

## Streaming (MistralChatStream mixin)
- SSE event parsing for streaming chat completions
- Text delta extraction from `choices[0].delta.content`
- Finish reason and usage tracking in final events

## Config
- API base URL: `https://api.mistral.ai`
- Endpoint: `/v1/chat/completions`

All clients follow the mixin pattern for reuse across capabilities.
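As a rough illustration of the mixin pattern described above: the provider mixin owns the API-specific parsing, and a capability client composes it. The method bodies and the `MistralTextGenerationClient` class below are simplified sketches for illustration, not the package's actual implementation.

```python
from typing import Any


class MistralChatClient:
    """Provider mixin (sketch): Mistral Chat API specifics."""

    BASE_URL = "https://api.mistral.ai"
    ENDPOINT = "/v1/chat/completions"

    def _parse_usage(self, response_data: dict[str, Any]) -> dict[str, int]:
        """Pull prompt/completion/total token counts from the response."""
        usage = response_data.get("usage", {})
        return {
            "prompt_tokens": usage.get("prompt_tokens", 0),
            "completion_tokens": usage.get("completion_tokens", 0),
            "total_tokens": usage.get("total_tokens", 0),
        }

    def _parse_content(self, response_data: dict[str, Any]) -> Any:
        """Extract choices[0].message.content, failing on empty choices."""
        choices = response_data.get("choices", [])
        if not choices:
            raise ValueError("No choices in response")
        return choices[0]["message"]["content"]


class MistralTextGenerationClient(MistralChatClient):
    """Hypothetical capability client: inherits API logic from the mixin."""

    def parse(self, response_data: dict[str, Any]) -> tuple[Any, dict[str, int]]:
        return self._parse_content(response_data), self._parse_usage(response_data)
```

The point of the split is that a future capability (e.g. structured extraction) can reuse the same mixin without duplicating the HTTP and parsing logic.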
Migrate Mistral text generation capability client to use provider package mixins, eliminating code duplication and centralizing API-specific logic.

## Changes
- Mistral client now inherits from `MistralChatClient` mixin
- Parameter mappers inherit from provider package mappers (`TemperatureMapper`, `MaxTokensMapper`, `OutputSchemaMapper`)
- Streaming class inherits from `MistralChatStream` mixin
- Remove duplicated HTTP request logic (`_make_request`, `_make_stream_request`)
- Remove duplicated parameter mapping logic (~125 lines)
- Remove duplicated streaming parsing logic (~60 lines)
- Simplify `_parse_usage` to use mixin's implementation
- Simplify `_parse_content` to use mixin's output parsing
- Simplify `_parse_finish_reason` to use mixin's implementation

## Code Reduction
- ~261 lines removed across client, parameters, and streaming files
- Significant deduplication of HTTP request, parameter mapping, and streaming logic
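The mapper migration can be pictured roughly as follows. Only the mapper names come from the PR; the base-class interface shown here is an assumption for illustration.

```python
from typing import Any


class TemperatureMapper:
    """Provider-package mapper (sketch): validates and maps temperature."""

    def map(self, request: dict[str, Any], value: float) -> dict[str, Any]:
        # Mistral's documented temperature range is [0.0, 2.0]
        if not 0.0 <= value <= 2.0:
            raise ValueError(f"temperature must be in [0.0, 2.0], got {value}")
        request["temperature"] = value
        return request


class CapabilityTemperatureMapper(TemperatureMapper):
    """Capability-side mapper after migration: pure inheritance, no duplicated logic."""
```

After the migration the capability package carries no mapping logic of its own, which is where the ~125-line reduction comes from.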
Remove unused config.py file from Mistral capability provider. Configuration is now centralized in the provider package (celeste_mistral.chat.config) following the established pattern from OpenAI and other providers.
Add celeste-mistral to `[tool.uv.sources]` in the text-generation capability package, which now imports from the Mistral provider package after the mixin migration. This ensures workspace dependencies are properly declared for the refactored capability client that uses the Mistral provider mixins.
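With uv workspaces, that declaration looks roughly like this in the capability package's pyproject.toml (a sketch; surrounding entries omitted):

```toml
[project]
dependencies = [
    "celeste-mistral",
]

[tool.uv.sources]
# Resolve celeste-mistral from the workspace instead of a registry
celeste-mistral = { workspace = true }
```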
## PR Review: Mistral Chat API Provider Package & Mixin Migration

### Summary
This PR introduces a well-architected provider package for the Mistral Chat API and successfully migrates the text generation capability to use mixins. The refactoring eliminates ~261 lines of duplicated code while establishing a reusable pattern for future capabilities. Overall, this is a solid implementation that follows the established OpenAI provider pattern.

### ✅ Strengths
- Architecture & Design
- Code Quality
- Migration Quality
### 🔍 Issues & Suggestions

#### 1. Missing Tests (High Priority)
**Issue:** No test files are included for the new provider package.

**Impact:**

**Recommendation:** Add unit tests for:

Example test structure:

```python
# packages/providers/mistral/tests/chat/test_client.py
async def test_make_request_sets_model_id():
    """Verify model ID is added to request body."""
    ...
```

#### 2. Config Enum Over-Engineering (Medium Priority)
**Location:**

**Issue:** The `MistralChatEndpoint` enum defines the full Mistral API surface, but only the chat completion endpoint is actually used.

**Code:**

```python
class MistralChatEndpoint(StrEnum):
    """Endpoints for Chat API."""

    CREATE_CHAT_COMPLETION = "/v1/chat/completions"
    CREATE_FIM_COMPLETION = "/v1/fim/completions"
    CREATE_AGENT_COMPLETION = "/v1/agents/completions"
    # ... 23 more unused endpoints
```

**Impact:**

**Recommendation:**
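One way to follow this recommendation is to keep only the endpoint the package actually calls. A sketch (spelled `str, Enum` here for portability, where the source uses `StrEnum`):

```python
from enum import Enum


class MistralChatEndpoint(str, Enum):
    """Slimmed-down endpoint enum (sketch): only what the chat package uses."""

    CREATE_CHAT_COMPLETION = "/v1/chat/completions"
```

The other endpoints can be reintroduced one by one as capabilities that need them land.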
#### 3. Unused Config Constants (Medium Priority)
**Location:**

**Issue:** Several config constants are defined but never used:

```python
CODESTRAL_HOST = "https://codestral.mistral.ai"
DEFAULT_CONTENT_TYPE = "application/json"
ACCEPT_HEADER = "application/json"
FILE_UPLOAD_CONTENT_TYPE = "multipart/form-data"
STREAMING_DELIMITER = "data: [DONE]"
```

**Recommendation:** Remove unused constants or add a comment explaining they're reserved for future use.

#### 4. Inconsistent Type Annotations (Low Priority)
**Location:**

**Issue:**

```python
def _parse_content(self, response_data: dict[str, Any]) -> Any:
    """Return choices from response."""
    choices = response_data.get("choices", [])
    if not choices:
        msg = "No choices in response"
        raise ValueError(msg)
    return choices
```

**Recommendation:** Use a more precise return type than `Any`.

#### 5. Magistral Models Hard-Coded Logic (Medium Priority)
**Location:**

**Issue:** Hard-coded handling for magistral thinking models that return list content:

```python
# Handle magistral thinking models that may return list content
if isinstance(content_delta, list):
    text_parts = []
    for block in content_delta:
        if isinstance(block, dict) and block.get("type") == "text":
            text_parts.append(block.get("text", ""))
    content_delta = "".join(text_parts) if text_parts else None
```

**Same issue in:**

**Concerns:**

**Recommendation:**
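One way to address the duplication flagged here is to lift the block-joining logic into a shared helper in the provider package. A sketch; the helper name is hypothetical:

```python
from typing import Any, Optional, Union


def extract_text_content(content: Union[str, list[Any], None]) -> Optional[str]:
    """Normalize content that may be a plain string or a list of typed
    blocks (as returned by magistral thinking models) into a string."""
    if content is None or isinstance(content, str):
        return content
    text_parts = [
        block.get("text", "")
        for block in content
        if isinstance(block, dict) and block.get("type") == "text"
    ]
    return "".join(text_parts) if text_parts else None
```

Both the streaming and non-streaming paths could then call the same helper instead of repeating the loop.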
#### 6. ThinkingBudgetMapper Validation Concern (Medium Priority)
**Location:**

**Issue:** The mapper accepts positive values (>0) but silently discards the specific budget, mapping any positive value to reasoning mode:

```python
def map(self, request: dict[str, Any], value: object, model: Model) -> dict[str, Any]:
    """Transform thinking_budget into provider request."""
    validated_value = self._validate_value(value, model)
    if validated_value is None:
        return request
    # Map unified values to Mistral's prompt_mode
    if validated_value == -1:
        request["prompt_mode"] = "reasoning"
    elif validated_value == 0:
        request["prompt_mode"] = None
    else:  # >0
        request["prompt_mode"] = "reasoning"
    return request
```

**Concerns:**

**Recommendation:**

```python
elif validated_value == 0:
    request["prompt_mode"] = None
elif validated_value > 0:
    # Mistral doesn't support budget control, so we enable reasoning mode
    # and ignore the specific budget value
    request["prompt_mode"] = "reasoning"
else:
    raise ValueError(f"Invalid thinking_budget value: {validated_value}")
```

#### 7. Missing Validation (Low Priority)
**Location:**

**Issue:**

```python
if isinstance(content, str):
    parsed = json.loads(content)  # Could raise JSONDecodeError
else:
    parsed = content
return TypeAdapter(value).validate_python(parsed)
```

**Recommendation:** Add try/except with a more helpful error message:

```python
if isinstance(content, str):
    try:
        parsed = json.loads(content)
    except json.JSONDecodeError as e:
        raise ValueError(f"Failed to parse structured output as JSON: {e}") from e
else:
    parsed = content
```

#### 8. Potential Issue with Empty Response (Medium Priority)
**Location:**

**Issue:** Usage parsing wraps the parent mixin's usage dict in `TextGenerationUsage`:

```python
def _parse_usage(self, response_data: dict[str, Any]) -> TextGenerationUsage:
    """Parse usage from response."""
    usage = super()._parse_usage(response_data)
    return TextGenerationUsage(**usage)
```

**Concerns:**
**Recommendation:** This is likely fine given the current implementation, but consider documenting the assumption.

### 🔒 Security
No significant security concerns identified. Minor notes:
### 📊 Performance
No performance concerns identified.
### 🧪 Test Coverage
**Major Gap:** No tests for the new provider package.

**Existing tests:** The PR relies on existing text-generation capability tests, but these:

**Recommendation:** Add minimal test coverage:
### 📝 Documentation
**Missing Documentation:**
### 🎯 Recommendations Priority
**Must Fix (Block Merge):**

**Should Fix (High Priority):**

**Nice to Have (Medium Priority):**

**Optional (Low Priority):**
### ✨ Conclusion
This is a well-executed refactoring that successfully establishes the provider package pattern for Mistral. The code quality is good, the architecture is sound, and it eliminates significant duplication. The main blocker is the lack of tests for the new provider package. Once tests are added, this PR will be ready to merge.

**Score: 7.5/10** (would be 9/10 with tests)

Great work on this refactoring! The mixin pattern is much cleaner than the previous approach.
Add standalone provider package for Mistral Chat API with mixin pattern for capability-agnostic reuse. Migrate text generation capability client to use the new provider mixins, eliminating code duplication and centralizing API-specific logic.
## Provider Package (celeste-mistral)

### Chat API (MistralChatClient)
- HTTP POST/streaming to the `/v1/chat/completions` endpoint
- Usage parsing: `prompt_tokens`, `completion_tokens`, `total_tokens`
- Content extraction from `choices[0].message.content`
- Finish reason mapping (`stop`, `length`, etc.)
- Structured output support via `response_format`
- Thinking support via the `prompt_mode` parameter

### Parameters
- `TemperatureMapper`: `temperature` float [0.0-2.0]
- `MaxTokensMapper`: `max_tokens` integer
- `OutputSchemaMapper`: JSON schema via `response_format`

### Streaming (MistralChatStream)
- SSE event parsing for streaming chat completions
- Text delta extraction from `choices[0].delta.content`
- Finish reason and usage tracking in final events

### Config
- API base URL: `https://api.mistral.ai`
- Endpoint: `/v1/chat/completions`

All clients follow the mixin pattern for reuse across capabilities.
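A minimal sketch of the SSE text-delta extraction the streaming mixin performs; the function name and error handling are illustrative, and the event shape follows the streamed chat-completion responses described above:

```python
import json
from typing import Iterable, Iterator


def iter_text_deltas(sse_lines: Iterable[str]) -> Iterator[str]:
    """Yield text deltas from the raw SSE lines of a streaming completion."""
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip keep-alives and non-data lines
        payload = line[len("data: "):]
        if payload == "[DONE]":  # stream terminator
            break
        event = json.loads(payload)
        delta = event["choices"][0].get("delta", {}).get("content")
        if delta:
            yield delta
```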
## Capability Migration

### Text Generation
- Client now inherits from the `MistralChatClient` mixin
- Parameter mappers inherit from provider package mappers (`TemperatureMapper`, `MaxTokensMapper`, `OutputSchemaMapper`)
- Streaming class inherits from the `MistralChatStream` mixin
- Removed unused `config.py` file (config now in provider package)

## Other Changes
### Fixes
- Added `celeste-mistral` workspace dependency to the text-generation `pyproject.toml`

### Code Statistics
- ~261 lines of duplicated code removed across client, parameters, and streaming files
## Breaking Changes
None. All changes are internal refactorings that maintain the same public API.