feat: implement get_model TODO and fix critical telemetry bug (#647)
* feat: implement get_model TODO and fix critical telemetry bug
- Enhanced get_model() to use _get_models_for_provider for dynamic model discovery (see the get_model sketch below)
  - Integrated with the existing dynamic fetching infrastructure
  - Added proper fallback handling for unknown providers/models
  - Improved parsing logic to handle provider/model formats correctly
- Fixed a critical telemetry bug in the OpenAI LLM where stripped model names caused warnings (see the telemetry sketch below)
  - Changed _record_usage calls to pass the full model name instead of base_model
  - Resolves "Unknown model x-ai/grok-4-fast:free" warnings during evals
  - Makes the OpenAI implementation consistent with Anthropic
- Improved reasoning model detection using metadata instead of hardcoded checks (see the extra_body sketch below)
  - Replaced the _is_reasoner() function with model_meta.supports_reasoning
  - Updated extra_body() to use a ModelMeta parameter for better reasoning support
  - Enhanced message preparation logic for reasoning models
- Added a comprehensive test suite with 10 focused tests (see the test sketch below)
  - Tests static/dynamic model lookup and provider-only requests
  - Validates error handling and fallback scenarios
  - Covers the new dynamic model fetching integration
All changes maintain backwards compatibility while significantly enhancing
model discovery capabilities and fixing evaluation warnings.
* Update gptme/llm/models.py
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
---------
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
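
For concreteness, here is a minimal sketch of the get_model() resolution flow described above. The ModelMeta fields, the example model lists, and the exact signature of _get_models_for_provider are illustrative assumptions rather than the actual gptme code:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelMeta:
    provider: str
    model: str
    supports_reasoning: bool = False


# Placeholder for the dynamic fetching layer; the real _get_models_for_provider
# presumably queries the provider's API and caches the resulting model list.
_EXAMPLE_MODELS = {
    "openai": [
        ModelMeta("openai", "gpt-4o"),
        ModelMeta("openai", "o1", supports_reasoning=True),
    ],
    "openrouter": [
        ModelMeta("openrouter", "x-ai/grok-4-fast:free"),
    ],
}


def _get_models_for_provider(provider: str) -> list[ModelMeta]:
    return _EXAMPLE_MODELS.get(provider, [])


def get_model(name: str) -> ModelMeta:
    """Resolve "provider/model" (or a bare provider) to a ModelMeta."""
    # Split on the first "/" only, so nested model ids such as
    # "openrouter/x-ai/grok-4-fast:free" keep their full model part.
    provider, _, model = name.partition("/")
    models = _get_models_for_provider(provider)
    if not model:
        # Provider-only request: fall back to the provider's first known model.
        if models:
            return models[0]
        raise ValueError(f"Unknown provider: {provider!r}")
    for meta in models:
        if meta.model == model:
            return meta
    # Unknown model: return a permissive fallback entry instead of failing hard.
    return ModelMeta(provider=provider, model=model)
```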
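
The telemetry fix boils down to which name reaches usage recording. A hedged sketch, assuming a registry keyed by full model names; the real _record_usage signature and registry in gptme may differ:

```python
import logging

logger = logging.getLogger(__name__)

# Hypothetical registry keyed by the full model name as gptme knows it,
# including the provider prefix for OpenRouter-served models.
_KNOWN_MODELS = {"openrouter/x-ai/grok-4-fast:free", "openai/gpt-4o"}


def _record_usage(model: str, prompt_tokens: int, completion_tokens: int) -> None:
    """Record token usage for telemetry; warn on unrecognized model names."""
    if model not in _KNOWN_MODELS:
        logger.warning("Unknown model %s", model)
        return
    # ... accumulate tokens/cost for `model` here ...


def _on_completion(model: str, prompt_tokens: int, completion_tokens: int) -> None:
    # Before the fix, the OpenAI path stripped the name first, roughly:
    #   base_model = model.split("/", 1)[-1]   # -> "x-ai/grok-4-fast:free"
    #   _record_usage(base_model, ...)         # -> "Unknown model ..." warning
    # After the fix, the full name is passed through, matching Anthropic:
    _record_usage(model, prompt_tokens, completion_tokens)
```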
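
The reasoning change swaps name matching for a capability flag on the model metadata. A sketch reusing the ModelMeta dataclass from the get_model example above; the old _is_reasoner() body and the request parameter shape are assumptions for illustration:

```python
# Old approach: hardcoded name checks (roughly what _is_reasoner() did).
def _is_reasoner(model: str) -> bool:
    return model.startswith(("o1", "o3"))


# New approach: extra_body() takes the ModelMeta and consults its
# supports_reasoning flag instead of matching on the model name.
def extra_body(model_meta: ModelMeta) -> dict:
    body: dict = {}
    if model_meta.supports_reasoning:
        # Parameter name/shape assumed for illustration; the actual request
        # options depend on the provider being targeted.
        body["reasoning"] = {"enabled": True}
    return body
```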
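
The new tests themselves are not reproduced here, but lookup tests along these lines cover the listed scenarios; they are written against the get_model sketch above, not the actual suite:

```python
import pytest


def test_static_model_lookup():
    meta = get_model("openai/gpt-4o")
    assert (meta.provider, meta.model) == ("openai", "gpt-4o")


def test_provider_only_request_uses_a_default():
    meta = get_model("openai")
    assert meta.provider == "openai"


def test_unknown_model_falls_back():
    meta = get_model("openai/some-future-model")
    assert meta.model == "some-future-model"


def test_unknown_provider_errors():
    with pytest.raises(ValueError):
        get_model("not-a-provider")
```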