
@santoshkumarradha
Member

This commit introduces several improvements to the agent testing framework and error handling mechanisms.

Key changes include:

- Refactoring `stubDispatcher` into more focused test setups, improving test clarity and reusability.
- Enhancing agent communication error handling within `ExecuteHandler` and `ExecuteAsyncHandler` so agent-side failures are properly reported to the user.
- Introducing a dedicated test for agent errors in `ExecuteHandler` to ensure robust handling of `5xx` responses from agents.
- Adding a sentinel value for corrupted JSON data to `decodePayload` in UI executions to prevent display of partial or incorrect previews.
- Improving the precision of agent IP detection and error handling in `agent.py` to make callbacks more reliable.
- Fixing various import errors and simplifying dependency checks in `agent_server.py`.
- Enhancing the `AgentAI` class to use `litellm.utils.token_counter` for more accurate token counting and prompt trimming.
- Refining sample agent fixtures and helper functions to streamline test writing and improve overall test stability.
- Upgrading `execution_state` and `async_execution_manager` with more robust type hints and internal logic.
- Adding more specific error handling and logging for failures during agent startup and communication, such as network connection problems or malformed responses.
- Making tests more resilient by simplifying conditional imports for optional dependencies, ensuring tests run even when some libraries are not installed.
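The corrupted-JSON sentinel mentioned above could look roughly like the following. This is a minimal sketch, not the actual `decodePayload` implementation; the function and sentinel names are illustrative:

```python
import json

# Hypothetical sentinel returned when a payload preview cannot be decoded.
# A distinct object (rather than None, which is a valid JSON value) lets the
# UI layer suppress the preview instead of rendering garbage.
CORRUPTED_PAYLOAD = object()

def decode_payload(raw: bytes):
    """Decode a JSON payload for preview; return a sentinel on corrupt data."""
    try:
        return json.loads(raw)
    except (ValueError, UnicodeDecodeError):
        return CORRUPTED_PAYLOAD

# Usage: a truncated payload yields the sentinel, so no partial preview leaks.
preview = decode_payload(b'{"status": "ok"')  # note the missing closing brace
if preview is CORRUPTED_PAYLOAD:
    preview_text = "(preview unavailable)"
```

Using `object()` instead of `None` as the sentinel matters here, because `null` is itself valid JSON and must remain distinguishable from a decode failure.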

Refactor agent testing and improve error handling

This commit enhances agent testing by refactoring stub dispatchers and improving error handling for agent communications. Specifically, it adds dedicated tests for agent errors, sentinel values for corrupted JSON previews, and refines IP detection and imports. AI capabilities are improved with better token counting and prompt trimming, and agent setup is streamlined with more resilient fixtures. Overall, this commit boosts test stability and robustness in agent interactions.
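The token-counting and prompt-trimming change can be sketched as a budget loop with a pluggable counter. The function and parameter names below are illustrative, not the actual `AgentAI` API; in the real code the counter would be `litellm.utils.token_counter`, with a whitespace-based counter standing in here so the sketch is self-contained:

```python
def naive_token_count(text: str) -> int:
    # Stand-in for litellm.utils.token_counter: one token per whitespace word.
    return len(text.split())

def trim_messages(messages, max_tokens, count=naive_token_count):
    """Drop the oldest non-system messages until the conversation fits the
    token budget, always keeping the system message and the newest message."""
    kept = list(messages)
    while sum(count(m["content"]) for m in kept) > max_tokens and len(kept) > 2:
        kept.pop(1)  # index 0 is the system message; index 1 is the oldest turn
    return kept

history = [
    {"role": "system", "content": "You are a helpful agent"},
    {"role": "user", "content": "first question about the deployment"},
    {"role": "assistant", "content": "a fairly long earlier answer " * 5},
    {"role": "user", "content": "latest question"},
]
trimmed = trim_messages(history, max_tokens=20)
```

Counting real model tokens rather than characters or words is what makes the trimming accurate: the budget check then matches what the LLM provider will actually bill and accept.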

Summary

Testing

  • ./scripts/test-all.sh
  • Additional verification (please describe):

Checklist

  • I updated documentation where applicable.
  • I added or updated tests (or none were needed).
  • I updated CHANGELOG.md (or this change does not warrant a changelog entry).

Screenshots (if UI-related)

Related issues

Adds a `required-checks` summary job to each workflow, which fails unless all preceding critical jobs have succeeded.

This also includes minor adjustments in the `control-plane.yml` file:
- Removes `continue-on-error: true` from the lint step, making lint failures critical.
- Updates the build output path for the server binary.
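A summary job of this kind typically depends on the other jobs and inspects their results. A minimal GitHub Actions sketch (the job names in `needs` are illustrative, not the actual workflow's):

```yaml
required-checks:
  name: Required checks
  runs-on: ubuntu-latest
  needs: [lint, test, build]   # illustrative job names
  if: always()                 # run even when a dependency failed
  steps:
    - name: Fail if any required job did not succeed
      run: |
        if [ "${{ contains(needs.*.result, 'failure') }}" = "true" ] \
          || [ "${{ contains(needs.*.result, 'cancelled') }}" = "true" ]; then
          echo "One or more required jobs did not succeed"
          exit 1
        fi
```

A single job like this gives branch protection one stable status check to require, instead of listing every matrix job individually.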

This commit introduces robust workflow tracking capabilities by instrumenting agent execution and communication.

Key changes include:
- Refactoring `AgentWorkflow` to handle execution context management, event notification, and interaction with `ExecutionContext`.
- Enhancing `BrainClient` to manage the active workflow context and propagate necessary headers.
- Modifying the `reasoner` decorator to seamlessly integrate with the new workflow system, capturing execution details and sending events to the Brain server.
- Introducing new helper functions for building execution contexts, composing event payloads, and handling asynchronous event publishing.
- Improving `ExecutionContext.to_headers` to include more relevant workflow information.
- Updating the `brain_binary` fixture to skip tests if brain server sources are unavailable.

These changes enable detailed tracking of agent reasoning steps, their inputs, outputs, and performance, providing valuable insights into agent behavior and workflow execution.
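The `ExecutionContext.to_headers` change might look roughly like the following. This is a sketch under assumptions: the field names and header names are illustrative, not the actual API.

```python
import uuid
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ExecutionContext:
    # Hypothetical fields; the real ExecutionContext likely carries more state.
    execution_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    workflow_id: Optional[str] = None
    parent_execution_id: Optional[str] = None

    def to_headers(self) -> dict:
        """Serialize workflow metadata into HTTP headers so downstream
        services can correlate events back to this execution."""
        headers = {"X-Execution-Id": self.execution_id}
        if self.workflow_id:
            headers["X-Workflow-Id"] = self.workflow_id
        if self.parent_execution_id:
            headers["X-Parent-Execution-Id"] = self.parent_execution_id
        return headers

ctx = ExecutionContext(workflow_id="wf-123")
```

Emitting optional headers only when set keeps requests outside any workflow indistinguishable from plain calls, so uninstrumented paths are unaffected.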

Removes the unused `asyncio` import from `agent_workflow.py`.
Cleans up the management of execution and client contexts within the agent workflow.

This change streamlines how parent contexts are retrieved and how execution contexts are built, improving readability and maintainability. It also ensures that client context is correctly propagated when workflow contexts are present.
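Parent-context retrieval and propagation of this kind is commonly built on `contextvars`. A minimal sketch, assuming a contextvar-based design; the names here are illustrative and the real `AgentWorkflow`/`BrainClient` API likely differs:

```python
import contextvars

# Holds the id of the currently active workflow, if any.
_current_workflow = contextvars.ContextVar("current_workflow", default=None)

class workflow_scope:
    """Set the active workflow for a with-block, restoring the parent
    context on exit so nested workflows unwind correctly."""
    def __init__(self, workflow_id: str):
        self.workflow_id = workflow_id
        self._token = None

    def __enter__(self):
        self._token = _current_workflow.set(self.workflow_id)
        return self

    def __exit__(self, *exc):
        _current_workflow.reset(self._token)

def outgoing_headers() -> dict:
    """Propagate the client context only when a workflow context is present."""
    wf = _current_workflow.get()
    return {"X-Workflow-Id": wf} if wf else {}

with workflow_scope("wf-outer"):
    with workflow_scope("wf-inner"):
        inner = outgoing_headers()
    outer = outgoing_headers()  # parent context restored after inner exits
```

The `ContextVar` token returned by `set()` is what makes restoration exact: resetting with it re-establishes whatever value the parent scope had, rather than blindly clearing the variable.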

Updates the Go version used in the control-plane GitHub Actions workflows from 1.23 to 1.24.2.

This ensures that the workflows are using the latest stable Go release for improved compatibility and performance.
@santoshkumarradha santoshkumarradha merged commit 6f39acb into main Nov 5, 2025
7 of 9 checks passed
@santoshkumarradha santoshkumarradha deleted the santosh/ci-cd branch November 5, 2025 03:35
