
[V1.2.3] Change model but requests still made to default #193

@ahmad-ajmal

Description


This error log shows that the request was still sent to gemini-2.5-pro:

```
2026-04-14 14:13:34.074 | WARNING  | agent_core.core.llm.google_gemini_client:_post_json:487 - [GEMINI ERROR] Status: 503, Body: {'error': {'code': 503, 'message': 'This model is currently experiencing high demand. Spikes in demand are usually temporary. Please try again later.', 'status': 'UNAVAILABLE'}}
2026-04-14 14:13:34.075 | ERROR    | agent_core.core.impl.llm.interface:_generate_gemini:1382 - Error calling Gemini API: 503 Server Error: Service Unavailable for url: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-pro:generateContent?key=[REDACTED]
2026-04-14 14:13:34.076 | ERROR    | agent_core.core.impl.llm.interface:_generate_gemini:1404 - [GEMINI_ERROR] HTTPError: 503 Server Error: Service Unavailable for url: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-pro:generateContent?key=[REDACTED]
2026-04-14 14:13:34.077 | ERROR    | agent_core.core.impl.llm.interface:_generate_response_sync:382 - [LLM ERROR] LLM provider returned error: HTTPError: 503 Server Error: Service Unavailable for url: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-pro:generateContent?key=[REDACTED]
2026-04-14 14:13:34.077 | WARNING  | agent_core.core.impl.llm.interface:_generate_response_sync:385 - [LLM CONSECUTIVE FAILURE] Count: 5/5
2026-04-14 14:13:34.085 | ERROR    | app.agent_base:_handle_react_error:1202 - [REACT ERROR] LLM calls have failed 5 consecutive times. Task aborted to prevent infinite retries. Please check your LLM configuration.
Traceback (most recent call last):
  File "/home/ahmad/Work/CraftOS/CraftBot/app/agent_base.py", line 413, in react
    await self._handle_complex_task_workflow(trigger_data, session_id)
  File "/home/ahmad/Work/CraftOS/CraftBot/app/agent_base.py", line 835, in _handle_complex_task_workflow
    action_decisions, reasoning = await self._select_action(trigger_data)
  File "/home/ahmad/Work/CraftOS/CraftBot/agent_core/decorators/profiler.py", line 707, in async_wrapper
    return await fn(*args, **kwargs)
  File "/home/ahmad/Work/CraftOS/CraftBot/app/agent_base.py", line 921, in _select_action
    return await self._select_action_in_task(trigger_data.query, trigger_data.session_id)
  File "/home/ahmad/Work/CraftOS/CraftBot/agent_core/decorators/profiler.py", line 707, in async_wrapper
    return await fn(*args, **kwargs)
  File "/home/ahmad/Work/CraftOS/CraftBot/app/agent_base.py", line 950, in _select_action_in_task
    action_decisions = await self.action_router.select_action_in_task(
  File "/home/ahmad/Work/CraftOS/CraftBot/agent_core/decorators/profiler.py", line 707, in async_wrapper
    return await fn(*args, **kwargs)
  File "/home/ahmad/Work/CraftOS/CraftBot/agent_core/core/impl/action/router.py", line 250, in select_action_in_task
    decision = await self._prompt_for_decision(
  File "/home/ahmad/Work/CraftOS/CraftBot/agent_core/core/impl/action/router.py", line 599, in _prompt_for_decision
    raw_response = await self.llm_interface.generate_response_async(system_prompt, current_prompt)
  File "/home/ahmad/Work/CraftOS/CraftBot/agent_core/decorators/profiler.py", line 707, in async_wrapper
    return await fn(*args, **kwargs)
  File "/home/ahmad/Work/CraftOS/CraftBot/agent_core/core/impl/llm/interface.py", line 436, in generate_response_async
    return await asyncio.to_thread(
  File "/home/ahmad/miniconda3/envs/craftbot/lib/python3.10/asyncio/threads.py", line 25, in to_thread
    return await loop.run_in_executor(None, func_call)
  File "/home/ahmad/miniconda3/envs/craftbot/lib/python3.10/asyncio/futures.py", line 285, in __await__
    yield self  # This tells Task to wait for completion.
  File "/home/ahmad/miniconda3/envs/craftbot/lib/python3.10/asyncio/tasks.py", line 304, in __wakeup
    future.result()
  File "/home/ahmad/miniconda3/envs/craftbot/lib/python3.10/asyncio/futures.py", line 201, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "/home/ahmad/miniconda3/envs/craftbot/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/ahmad/Work/CraftOS/CraftBot/agent_core/core/impl/llm/interface.py", line 389, in _generate_response_sync
    raise LLMConsecutiveFailureError(self._consecutive_failures)
agent_core.core.impl.llm.errors.LLMConsecutiveFailureError: LLM calls have failed 5 consecutive times. Task aborted to prevent infinite retries. Please check your LLM configuration.

2026-04-14 14:13:34.089 | WARNING  | app.agent_base:_handle_react_error:1234 - [REACT ERROR] LLMConsecutiveFailureError detected - cancelling task f1cdd8 to prevent infinite retry loop.
2026-04-14 14:13:34.099 | WARNING  | agent_core.core.impl.task.manager:_log_to_task_history:702 - [TaskManager] TASK_HISTORY.md not found at /home/ahmad/Work/CraftOS/CraftBot/agent_file_system/TASK_HISTORY.md
```

(Note: the API key has been redacted from the logged URLs; the key was exposed in the original log and should be rotated.)

This happens even though I had set the model to flash:

[screenshot of the model setting]
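One common cause of this symptom (a guess only; I have not confirmed this against the CraftBot source) is that the client snapshots the model name once at construction time, so a settings change made afterwards never reaches the request URL. The class and attribute names below (`Settings`, `GeminiClient`) are hypothetical and only sketch the suspected pattern:

```python
# Hypothetical sketch -- these names are NOT from the CraftBot codebase.
BASE = "https://generativelanguage.googleapis.com/v1beta/models"

class Settings:
    def __init__(self, model: str = "gemini-2.5-pro"):
        self.model = model

class GeminiClient:
    def __init__(self, settings: Settings):
        self.settings = settings
        # Suspected bug: the model is copied once at construction time,
        # so later changes to settings.model are ignored.
        self.model = settings.model

    def request_url_buggy(self) -> str:
        return f"{BASE}/{self.model}:generateContent"

    def request_url_fixed(self) -> str:
        # Fix: read the *current* setting on every request.
        return f"{BASE}/{self.settings.model}:generateContent"

settings = Settings()
client = GeminiClient(settings)
settings.model = "gemini-2.5-flash"  # user switches the model after startup

print(client.request_url_buggy())  # still targets gemini-2.5-pro
print(client.request_url_fixed())  # targets gemini-2.5-flash
```

If this is what is happening, restarting the agent after changing the model would work around it, which might be a quick way to confirm the diagnosis.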
