Use model instead of config.model inside an LLM instance #1849
Features
Fixed a bug where model configurations could interfere with each other across LLM instances: introduced self.model as an independent copy of config.model
Established a new code standard: LLM code internally reads self.model instead of self.config.model, avoiding accidental modification of the global configuration
Added support for manually overriding the model name of a specific LLM instance (e.g., role_zero_memory is pinned to 4o-mini for memory compression) without affecting other instances (see the sketch after this list)
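A minimal sketch of the pattern, assuming simplified stand-ins for the config and provider classes (LLMConfig and BaseLLM below are illustrative, not MetaGPT's actual definitions):

```python
from dataclasses import dataclass


@dataclass
class LLMConfig:
    """Simplified stand-in for the shared, global LLM configuration."""
    model: str = "gpt-4o"


class BaseLLM:
    """Simplified stand-in for an LLM provider class."""

    def __init__(self, config: LLMConfig):
        self.config = config
        # Copy the model name into an instance attribute so that later
        # per-instance overrides never touch the shared config object.
        self.model = config.model

    def describe(self) -> str:
        # Internal code reads self.model, never self.config.model.
        return f"provider using model {self.model!r}"
```

With this split, a call site such as the role_zero_memory compression path can set llm.model = "gpt-4o-mini" on its own instance while every other instance keeps reading the untouched global value.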
Feature Docs
New usage standards:
self.model: The private model setting of an LLM instance; it can be modified safely and affects only the current instance
self.config.model: The global configuration; it should not be modified directly, since the config object is shared and any change would leak to other LLM instances (see the usage sketch below)
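A hypothetical usage sketch of these two rules, again using toy stand-ins rather than MetaGPT's real classes:

```python
from types import SimpleNamespace


class LLM:
    """Toy provider; not MetaGPT's real class."""

    def __init__(self, config):
        self.config = config       # shared, global configuration
        self.model = config.model  # private, per-instance copy


shared_config = SimpleNamespace(model="gpt-4o")
chat_llm = LLM(shared_config)
memory_llm = LLM(shared_config)

# Safe: only memory_llm changes; chat_llm and the global config are untouched.
memory_llm.model = "gpt-4o-mini"
assert chat_llm.model == "gpt-4o"
assert shared_config.model == "gpt-4o"

# Unsafe: both instances alias the same config object, so this write
# leaks into every other instance that still reads config.model.
memory_llm.config.model = "gpt-4o-mini"
assert chat_llm.config.model == "gpt-4o-mini"  # configuration pollution
```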
Impact
Bug fix: Resolved the issue where modifying self.config.model unexpectedly changed the configuration of other LLM instances
Code safety: Improved isolation between LLM instances, preventing configuration pollution
Backward compatibility: Existing code logic remains largely unchanged; only the internal implementation becomes safer
Extensibility: Provides a safe way to customize the model for specific scenarios
Result
All related unit tests passed (a sketch of the core isolation check follows the list):
✅ tests/metagpt/provider/test_base_llm.py
✅ tests/metagpt/provider/mock_llm_config.py
✅ tests/metagpt/provider/test_bedrock_api.py
✅ tests/metagpt/provider/test_human_provider.py
✅ tests/metagpt/provider/test_ollama_api.py
✅ tests/metagpt/provider/test_openai.py (TTS-related failures stem from oneapi and are not caused by this change)
✅ tests/metagpt/provider/test_qianfan_api.py (TypeError-related failures are not caused by this change)
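For illustration only, the isolation property these tests exercise might be checked as follows; Config and MockLLM here are hypothetical stand-ins, not the suite's actual fixtures:

```python
class Config:
    def __init__(self, model: str):
        self.model = model


class MockLLM:
    def __init__(self, config: Config):
        self.config = config
        self.model = config.model  # independent per-instance copy


def test_model_override_is_isolated():
    shared = Config(model="gpt-4o")
    llm_a, llm_b = MockLLM(shared), MockLLM(shared)
    llm_a.model = "gpt-4o-mini"
    # Overriding one instance must not leak to siblings or the global config.
    assert llm_b.model == "gpt-4o"
    assert shared.model == "gpt-4o"
```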
Other
This change is a preventive bug fix that primarily affects the internal implementation of the LLM provider layer and has no impact on external API usage. Reviewers are advised to focus on the code paths related to LLM instance creation and model configuration.