Closed
PR #231 added support for `reasoning_effort` as a possible kwarg for custom models. Unless I am mistaken, however, the `LLMConfig` dataclass (https://github.com/codelion/openevolve/blob/139cbc7c3270327775b0811715236fea069375ab/openevolve/config.py#L44) does not actually accept `reasoning_effort` anywhere. If you include `reasoning_effort` in your config, openevolve fails with the following error:
```
  File "env/lib/python3.10/site-packages/openevolve/controller.py", line 87, in __init__
    self.config = load_config(config_path)
  File "env/lib/python3.10/site-packages/openevolve/config.py", line 456, in load_config
    config = Config.from_yaml(config_path)
  File "env/lib/python3.10/site-packages/openevolve/config.py", line 337, in from_yaml
    return cls.from_dict(config_dict)
  File "env/lib/python3.10/site-packages/openevolve/config.py", line 359, in from_dict
    config.llm = LLMConfig(**llm_dict)
TypeError: LLMConfig.__init__() got an unexpected keyword argument 'reasoning_effort'
```
I think the fix is as simple as adding `reasoning_effort` to the dataclass. However, I want to be sure we are not doing something silly on our end.

Here is the relevant part of the YAML we are using for our config:
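For illustration, here is a minimal sketch of the kind of change I mean, using a trimmed-down stand-in for `LLMConfig` (the field name comes from the error message; the `None` default and the other fields shown are assumptions on my part, not openevolve's actual definition):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, trimmed-down stand-in for openevolve's LLMConfig.
# The one-line fix: declare the field so **kwargs unpacking accepts it.
@dataclass
class LLMConfig:
    temperature: float = 0.7
    max_tokens: int = 4096
    # New field; defaulting to None (i.e. "unset") is an assumption.
    reasoning_effort: Optional[str] = None

cfg = LLMConfig(**{"temperature": 0.7, "reasoning_effort": "high"})
print(cfg.reasoning_effort)  # high
```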
```yaml
log_level: INFO
llm:
  models:
    - name: gpt-oss-120b
      weight: 1.0
      api_base: <endpoint>
      api_key: <key>
      temperature: 0.7
      max_tokens: 100000
      timeout: 5000
      retries: 1000000
      reasoning_effort: high
```
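For context, the `TypeError` is just standard dataclass behavior: `config.llm = LLMConfig(**llm_dict)` unpacks the parsed YAML dict into keyword arguments, and any key that is not a declared field raises exactly this error. A self-contained repro with a hypothetical stand-in class (not openevolve's actual code):

```python
from dataclasses import dataclass

@dataclass
class MinimalLLMConfig:  # stand-in; no reasoning_effort field declared
    temperature: float = 0.7

# Simulates the dict parsed from the YAML above
llm_dict = {"temperature": 0.7, "reasoning_effort": "high"}

try:
    MinimalLLMConfig(**llm_dict)
except TypeError as e:
    # ... got an unexpected keyword argument 'reasoning_effort'
    print(e)
```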