
Introduce SuspendObjectStream and expand tool schema validation#52

Merged
JohnRichard4096 merged 15 commits into main from feat/reasoning-and-streaming
Apr 19, 2026

Conversation

@JohnRichard4096
Member

@JohnRichard4096 JohnRichard4096 commented Apr 19, 2026

close #51

Summary by Sourcery

Introduce a reusable SuspendObjectStream abstraction for suspendable streaming, refactor ChatObject and reasoning agents to use it with enhanced reasoning and usage tracking, expand tool schema validation and documentation, and bump the library version.

Enhancements:

  • Extract a generic SuspendObjectStream class to provide suspend/resume and single-consumer streaming with callback support, and refactor ChatObject to inherit from it while tracking extra token usage.
  • Enhance built-in ReAct-style agent reasoning flow to use a dedicated think_and_reason tool schema, separate reasoning summary and content prompts, enrich streamed metadata, and correctly accumulate token usage from multiple calls.
  • Extend FunctionPropertySchema to support comprehensive JSON Schema-style constraints (numeric, string, array, object, enum/const, union types, defaults) with stricter validators, and update examples to use these validations.
  • Improve get_last_response to optionally stream intermediate chunks to another SuspendObjectStream while capturing the final UniResponse.
  • Refine reasoning and security templates, suspend mechanism docs, ChatObject and SuspendObjectStream API docs, tool registration examples, MCP integration docs, and embedding docs, while removing version-gated wording and updating supported Python versions.
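
The suspend/resume and single-consumer streaming semantics summarized above can be sketched with a plain asyncio queue. `MiniObjectStream` below mirrors the method names from the PR's API (`yield_response`, `set_queue_done`, `get_response_generator`) but is an illustrative stand-in, not the real AnyIO-based implementation:

```python
import asyncio


class MiniObjectStream:
    """Sketch of the SuspendObjectStream idea: a single-consumer object
    queue with an explicit done marker. Not the real implementation."""

    _DONE = object()

    def __init__(self, queue_size: int = 45):
        self._queue: asyncio.Queue = asyncio.Queue(maxsize=queue_size)
        self._has_consumer = False

    async def yield_response(self, item):
        # Producer side: push one object into the stream.
        await self._queue.put(item)

    async def set_queue_done(self):
        # Signal the consumer that no more items will arrive.
        await self._queue.put(self._DONE)

    def get_response_generator(self):
        # Enforce the single-consumer constraint described in the PR.
        if self._has_consumer:
            raise RuntimeError("Response is already being consumed.")
        self._has_consumer = True
        return self._generator()

    async def _generator(self):
        while True:
            item = await self._queue.get()
            if item is self._DONE:
                return
            yield item


async def demo():
    stream = MiniObjectStream()
    await stream.yield_response("chunk-1")
    await stream.yield_response("chunk-2")
    await stream.set_queue_done()
    return [item async for item in stream.get_response_generator()]


print(asyncio.run(demo()))  # ['chunk-1', 'chunk-2']
```

With this shape, `ChatObject` inheriting the base class gets queueing, done-signalling, and the single-consumer guard for free.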

Build:

  • Bump package version to 0.8.2 and update Python support in documentation to 3.10–3.13.

Documentation:

  • Document the new SuspendObjectStream type in both English and Chinese API references, and expand guides on tools, streaming, suspension, validation, security, built-ins, extensions, and function implementation to reflect the new streaming and reasoning capabilities.

Tests:

  • Add tests for SuspendObjectStream suspend/resume behavior and move existing ChatObject suspend tests to this new abstraction.

@sourcery-ai
Contributor

sourcery-ai Bot commented Apr 19, 2026

Reviewer's Guide

Refactors ChatObject to inherit a new generic SuspendObjectStream for unified suspend/resume and streaming behavior, adds usage aggregation utilities, extends function-parameter JSON schema validation and docs, enhances ReAct reasoning flows and built-in reasoning tool schema, and updates documentation and tests to match the new streaming and validation model.

Sequence diagram for enhanced reasoning flow with think_and_reason tool

sequenceDiagram
    actor User
    participant ChatClient as ChatClient
    participant ChatObject as ChatObject
    participant Agent as BaseReActAgentStrategy
    participant ToolsCaller as tools_caller
    participant ReasonTool as REASONING_TOOL
    participant LLM as call_completion

    User->>ChatClient: send user_input
    ChatClient->>ChatObject: create ChatObject(...)
    ChatClient->>ChatObject: begin()
    ChatObject->>Agent: _run_strategy()
    Agent->>Agent: _generate_reasoning_msg(tools_ctx, then)
    Agent->>ToolsCaller: tools_caller(reasoning_trigger_msg, [REASONING_TOOL,...])
    ToolsCaller->>ReasonTool: invoke think_and_reason
    ReasonTool-->>ToolsCaller: UniResponse~None,list~ToolCall~~
    ToolsCaller-->>Agent: tool_response with ToolCall
    Agent->>Agent: _generate_reasoning_content(tool_call, reasoning_trigger_msg)
    Agent->>ChatObject: yield_response(MessageWithMetadata summary)
    Agent->>LLM: call_completion(reasoning_trigger_msg,...)
    LLM-->>Agent: streaming chunks
    loop reasoning content streaming
        Agent->>ChatObject: yield_response(MessageWithMetadata reasoning_chunk)
    end
    LLM-->>Agent: UniResponse~str,None~ ct
    Agent->>ChatObject: update extra_usage via gather_usage
    Agent-->>Agent: then(self, tool_call, ct)
    Agent-->>ChatObject: append reasoning to context
    ChatObject-->>ChatClient: stream final answer
    ChatClient-->>User: display reasoning-aware response

Sequence diagram for get_last_response with optional streaming

sequenceDiagram
    participant Producer as AsyncGenerator
    participant GetLast as get_last_response
    participant Target as SuspendObjectStream

    loop iterate generator
        Producer-->>GetLast: yield chunk
        alt chunk is UniResponse
            GetLast->>GetLast: resp = chunk
        else chunk is RESPONSE_TYPE and Target provided
            GetLast->>Target: yield_response(wrapper(chunk))
        else
            GetLast->>GetLast: ignore chunk
        end
    end
    alt resp is None
        GetLast-->>GetLast: raise RuntimeError
    else
        GetLast-->>Caller: return resp (UniResponse~str,None~)
    end

Class diagram for SuspendObjectStream and ChatObject streaming refactor

classDiagram
    class SuspendObjectStream~ObjectTypeT~ {
        -ObjectSendStream _send_stream
        -ObjectReceiveStream _receive_stream
        -CALLBACK_TYPE _callback_fun
        -aiologic.Lock _callback_lock
        -bool _queue_done
        -bool _has_consumer
        -float _q_tout
        -tuple~str~ _suspend_tags
        -asyncio.Future __suspend_signal
        -asyncio.Future __resume_signal
        -object __done_marker
        +SuspendObjectStream(queue_size int=45, queue_timeout float|None=10.0, callback CALLBACK_TYPE|None=None)
        +wait_to_suspend(tags str, timeout float|None) async
        +resume() void
        +queue_closed() bool
        +set_queue_done() async
        +yield_response(response ObjectTypeT) async
        +yield_response_iteration(iterator AsyncGenerator~ObjectTypeT,None~) async
        +get_response_generator() AsyncGenerator~ObjectTypeT,None~
        +set_callback_func(func CALLBACK_TYPE) void
        +suspend(func Callable, tag str|None) Callable$ static
        +suspend_with_tag(tag str) Callable$ static
        -_wait_for_continue(tag str|None) async bool
        -_response_generator() async AsyncGenerator~ObjectTypeT,None~
        -_put_to_queue(item Any) async
    }

    class ChatObject {
        +str stream_id
        +str user_input
        +Message user_message
        +Message system_message
        +datetime last_call
        +str session_id
        +UniResponse~str,None~ response
        +UniResponseUsage~int~ extra_usage
        +ModelPreset preset
        +AmritaConfig config
        +SessionData session
        +type AgentStrategy strategy
        +Template template
        +dict~str,Any~ jinja2_vars
        +bool _is_running
        +bool _is_done
        +Task~None~ _task
        +BaseException _err
        +float _q_tout
        +dict~str,Any~ _hook_kwargs
        +tuple~Any,...~ _hook_args
        +tuple~type~BaseException~~ _raised_exc
        +ChatObject(..., queue_size int=45, queue_timeout float|None=10.0, callback RESPONSE_CALLBACK_TYPE|None=None)
        +begin() ChatObject
        +get_exception() BaseException|None
        +get_response_generator() AsyncGenerator~RESPONSE_TYPE,None~
        +full_response() async str
        +set_queue_done() async
        +yield_response(response RESPONSE_TYPE) async
        +yield_response_iteration(iterator AsyncGenerator~RESPONSE_TYPE,None~) async
        +set_callback_func(func RESPONSE_CALLBACK_TYPE) void
        +wait_to_suspend(tags str, timeout float|None) async
        +resume() void
        +_entry() async
        +_run() async
        +_run_strategy() async
        +_run_agent(ctx StrategyContext) async
        +_process_chat(send_messages CONTENT_LIST_TYPE) async UniResponse~str,None~
    }

    class UniResponseUsage~T~ {
        +T prompt_tokens
        +T completion_tokens
        +T total_tokens
    }

    SuspendObjectStream <|-- ChatObject : inherits
    UniResponseUsage~int~ --> ChatObject : extra_usage

Class diagram for FunctionPropertySchema JSON Schema extensions

classDiagram
    class FunctionPropertySchema~T~ {
        +str|list~str~ type
        +str description
        +bool required
        +list~T~ enum
        +Any const
        +Any default
        +float minimum
        +float maximum
        +bool exclusiveMinimum
        +bool exclusiveMaximum
        +float multipleOf
        +int minLength
        +int maxLength
        +str pattern
        +FunctionPropertySchema properties
        +list~str~ required
        +FunctionPropertySchema items
        +int minItems
        +int maxItems
        +bool uniqueItems
        +bool|dict~str,Any~ additionalProperties
        +str format
        +bool nullable
        +FunctionPropertySchema validator() Self
    }

    FunctionPropertySchema~T~ ..> FunctionPropertySchema~T~ : recursive properties/items

File-Level Changes

Change Details Files
Refactor ChatObject to use generic SuspendObjectStream for streaming, suspend/resume, and callbacks, and adjust associated utilities and usage accounting.
  • Make ChatObject inherit from SuspendObjectStream[RESPONSE_TYPE] and remove its bespoke streaming, suspend/resume, and callback implementation.
  • Initialize the base SuspendObjectStream in ChatObject.__init__ with queue_size, callback, and queue_timeout, and rely on inherited get_response_generator, yield_response, wait_to_suspend, resume, and callback behavior.
  • Replace @suspend decorators with @SuspendObjectStream.suspend on ChatObject internal methods, and update ChatObject docs to describe the new inheritance, parameters, and streaming semantics.
  • Introduce gather_usage and n2zero helpers to accumulate UniResponseUsage values and use them in ChatObject._run and agent reasoning flows to track extra token usage.
src/amrita_core/chatmanager.py
src/amrita_core/streaming.py
src/amrita_core/utils.py
docs/guide/api-reference/classes/ChatObject.md
docs/zh/guide/api-reference/classes/ChatObject.md
tests/test_chatobject.py
tests/test_object_stream.py
docs/guide/concepts/suspend.md
docs/zh/guide/concepts/suspend.md
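
The gather_usage and n2zero helpers described above might look roughly like this (`Usage` is a stand-in for `UniResponseUsage`; the explicit is-None branch for total_tokens follows the fix suggested in the review comments):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Usage:
    # Stand-in for UniResponseUsage; fields may be None for some providers.
    prompt_tokens: Optional[int] = None
    completion_tokens: Optional[int] = None
    total_tokens: Optional[int] = None


def n2zero(v: Optional[int]) -> int:
    # Treat missing counters as zero when accumulating.
    return 0 if v is None else v


def gather_usage(base: Usage, *others: Optional[Usage]) -> Usage:
    # Accumulate token usage across multiple LLM calls into `base`.
    for u in others:
        if u is None:
            continue
        base.prompt_tokens = n2zero(base.prompt_tokens) + n2zero(u.prompt_tokens)
        base.completion_tokens = n2zero(base.completion_tokens) + n2zero(
            u.completion_tokens
        )
        if u.total_tokens is not None:
            base.total_tokens = n2zero(base.total_tokens) + u.total_tokens
        else:
            # Only recompute the total when the provider omitted it.
            base.total_tokens = (
                n2zero(base.total_tokens)
                + n2zero(u.prompt_tokens)
                + n2zero(u.completion_tokens)
            )
    return base


total = gather_usage(Usage(0, 0, 0), Usage(10, 5, 15), Usage(3, None, None))
print(total)  # Usage(prompt_tokens=13, completion_tokens=5, total_tokens=18)
```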
Enhance FunctionPropertySchema to support richer JSON Schema constraints and unions, with corresponding validation logic and docs/examples.
  • Add const, default, numeric (minimum/maximum/exclusive/multipleOf), string (minLength/maxLength/pattern/format), array and object (additionalProperties, nullable) constraint fields to FunctionPropertySchema.
  • Update the validator to support list-type unions, derive flags for object/array/string/numeric/boolean, and enforce that only compatible constraint groups are set per included type, with detailed error messages.
  • Extend English and Chinese tool concept docs and function-implementation docs with advanced FunctionPropertySchema usage examples, validation rules, and updated tool schema examples using these constraints.
  • Update various extension integration docs to show tools defined with stricter FunctionPropertySchema validation.
src/amrita_core/tools/models.py
docs/guide/concepts/tool.md
docs/zh/guide/concepts/tool.md
docs/guide/function-implementation.md
docs/zh/guide/function-implementation.md
docs/guide/extensions-integration/index.md
docs/zh/guide/extensions-integration/index.md
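
As an illustration of the constraint fields listed above, a numeric and an array parameter could be expressed as follows. Plain dicts stand in for FunctionPropertySchema here, and `check_number` is a toy check mirroring the numeric rules, not the library's validator:

```python
# Hypothetical tool-parameter schemas using the JSON-Schema-style
# constraint fields the PR adds to FunctionPropertySchema.
temperature_schema = {
    "type": "number",
    "description": "Sampling temperature",
    "minimum": 0.0,
    "maximum": 2.0,
    "default": 1.0,
}

tags_schema = {
    "type": "array",
    "items": {"type": "string", "minLength": 1, "maxLength": 32},
    "minItems": 1,
    "maxItems": 10,
    "uniqueItems": True,
}


def check_number(schema: dict, value) -> bool:
    # Minimal numeric-constraint check: type, then minimum/maximum.
    ok = isinstance(value, (int, float))
    if "minimum" in schema:
        ok = ok and value >= schema["minimum"]
    if "maximum" in schema:
        ok = ok and value <= schema["maximum"]
    return ok


print(check_number(temperature_schema, 0.7))  # True
print(check_number(temperature_schema, 3.5))  # False
```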
Improve built-in reasoning tooling and ReAct strategies to separate reasoning summary from content generation, stream reasoning chunks with metadata, and track extra usage.
  • Change REASONING_TEMPLATE into a detailed reasoning summary instruction template and add REASONING_CONTENT_TEMPLATE for generating full internal reasoning paragraphs, including tool list and language guidance.
  • Update the built-in think_and_reason tool schema to take last_step and summary instead of a generic content field, adjusting required fields accordingly and updating docs describing its parameters.
  • Refactor BaseReActAgentStrategy to introduce _generate_reasoning_content, which calls the LLM with the content template, streams chunks through ChatObject.yield_response with metadata, aggregates usage into chat_object.extra_usage, and is reused from both generic and specific reasoning flows.
  • Change _generate_reasoning_msg and _execute_tool_loop to work with ToolCall plus tool responses, pass through to strategy-specific _append_reasoning and _build_stop_response_and_append, and update Hybrid and ReAct strategies to append reasoning and stop tool results in a more structured way (assistant + ToolResult pairing).
src/amrita_core/builtins/agent.py
src/amrita_core/builtins/consts.py
src/amrita_core/builtins/tools.py
docs/guide/builtins.md
docs/zh/guide/builtins.md
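
A hypothetical JSON rendering of the updated think_and_reason schema, with the last_step and summary parameters described above (the exact structure and required set are assumed, not taken from the builtin definition):

```python
# Parameter names follow the PR text; everything else is an assumption.
THINK_AND_REASON = {
    "type": "function",
    "function": {
        "name": "think_and_reason",
        "parameters": {
            "type": "object",
            "properties": {
                "last_step": {
                    "type": "string",
                    "description": "The last step taken; blank if none yet.",
                },
                "summary": {
                    "type": "string",
                    "description": "Brief summary of the current focus.",
                },
            },
            # Assumed: last_step may be left blank, so only summary is
            # marked required here.
            "required": ["summary"],
        },
    },
}

print(sorted(THINK_AND_REASON["function"]["parameters"]["properties"]))
# ['last_step', 'summary']
```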
Extend libchat.get_last_response to support simultaneous streaming of intermediate chunks to a SuspendObjectStream while capturing the final UniResponse, and document it.
  • Define RESPONSE_TYPE alias in libchat and update get_last_response to accept a generator yielding RESPONSE_TYPE or UniResponse, plus optional yield_to and yield_to_wrapper parameters.
  • Implement logic to iterate the generator once, forwarding non-UniResponse chunks to yield_to (optionally wrapped) while keeping the last UniResponse, raising if none is found.
  • Document the new get_last_response signature, parameters, behavior, and usage patterns in both English and Chinese function-implementation guides, including examples integrating with ChatObject or a custom SuspendObjectStream.
src/amrita_core/libchat.py
docs/guide/function-implementation.md
docs/zh/guide/function-implementation.md
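
The described get_last_response behavior, forwarding intermediate chunks while keeping the final UniResponse and raising if none is found, can be sketched as follows (a plain list stands in for the SuspendObjectStream target, and `FinalResponse` for UniResponse):

```python
import asyncio
from typing import Any, AsyncGenerator, Callable, Optional


class FinalResponse:
    # Stand-in for UniResponse: marks the terminal item of a stream.
    def __init__(self, content: str):
        self.content = content


async def get_last_response_sketch(
    gen: AsyncGenerator[Any, None],
    yield_to: Optional[list] = None,
    wrapper: Callable[[Any], Any] = lambda x: x,
) -> FinalResponse:
    """Iterate the generator once: forward non-final chunks to
    yield_to (optionally wrapped), keep the last final response."""
    resp = None
    async for chunk in gen:
        if isinstance(chunk, FinalResponse):
            resp = chunk
        elif yield_to is not None:
            yield_to.append(wrapper(chunk))
    if resp is None:
        raise RuntimeError("Generator yielded no final response.")
    return resp


async def demo():
    async def producer():
        yield "partial-1"
        yield "partial-2"
        yield FinalResponse("done")

    chunks: list = []
    final = await get_last_response_sketch(producer(), yield_to=chunks)
    return chunks, final.content


print(asyncio.run(demo()))  # (['partial-1', 'partial-2'], 'done')
```

In the real API, `yield_to` would be a SuspendObjectStream and forwarding would go through its `yield_response`.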
Add and document the generic SuspendObjectStream type, adjust suspend docs, and update miscellaneous docs and metadata to match the new behavior.
  • Introduce src/amrita_core/streaming.py implementing SuspendObjectStream with AnyIO memory object streams, callback support, and suspend/resume decorators, plus tests for its suspend behavior.
  • Add English and Chinese API reference pages for SuspendObjectStream and link them from index docs alongside other core types.
  • Update suspend concept docs (EN/ZH) to describe SuspendObjectStream as the implementation of the suspend mechanism, show usage of SuspendObjectStream.suspend / suspend_with_tag, and note that ChatObject inherits from it.
  • Adjust security, getting-started, embedding, MCP integration, and API reference docs to remove version qualifiers (0.8.0+) from features that are now baseline, clarify support matrices, and mention SuspendObjectStream where relevant; bump package version to 0.8.2 and narrow Python support to <=3.13.
src/amrita_core/streaming.py
tests/test_object_stream.py
docs/guide/api-reference/classes/SuspendObjectStream.md
docs/zh/guide/api-reference/classes/SuspendObjectStream.md
docs/guide/concepts/suspend.md
docs/zh/guide/concepts/suspend.md
docs/guide/security-mechanisms.md
docs/zh/guide/security-mechanisms.md
docs/guide/getting-started/index.md
docs/zh/guide/getting-started/index.md
docs/guide/api-reference/index.md
docs/zh/guide/api-reference/index.md
docs/guide/api-reference/classes/EmbeddingChunk.md
docs/zh/guide/api-reference/classes/EmbeddingChunk.md
docs/guide/extensions-integration/mcp-server-integration.md
docs/zh/guide/extensions-integration/mcp-server-integration.md
pyproject.toml
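
The suspend/resume handshake that SuspendObjectStream.suspend provides can be approximated with asyncio events: a decorated coroutine pauses at its entry until resume() is called, and a waiter can observe that the suspension point was reached. This is a simplified sketch, not the real future-based implementation:

```python
import asyncio
import functools


class Suspendable:
    """Toy suspend/resume handshake modeled on the described API."""

    def __init__(self):
        self._suspended = asyncio.Event()
        self._resume = asyncio.Event()

    @staticmethod
    def suspend(func):
        # Decorator: pause the wrapped coroutine until resume() fires.
        @functools.wraps(func)
        async def wrapper(self, *args, **kwargs):
            self._suspended.set()      # announce the suspension point
            await self._resume.wait()  # block until resume()
            return await func(self, *args, **kwargs)

        return wrapper

    async def wait_to_suspend(self, timeout=None):
        # Block until the decorated coroutine reaches its suspend point.
        await asyncio.wait_for(self._suspended.wait(), timeout)

    def resume(self):
        self._resume.set()


class Job(Suspendable):
    @Suspendable.suspend
    async def run(self):
        return "ran"


async def demo():
    job = Job()
    task = asyncio.create_task(job.run())
    await job.wait_to_suspend(timeout=1)
    job.resume()
    return await task


print(asyncio.run(demo()))  # ran
```

The real class additionally supports per-tag suspension (`suspend_with_tag`) and timeouts on the queue side, per the class diagram above.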

Assessment against linked issues

Issue Objective Addressed Explanation
#51 Correct the agent stop/reasoning tool processing in BaseReActAgentStrategy/ReActAgentStrategy so that the STOP and reasoning tools are handled properly (including tool call parsing, reasoning content generation, and stop response appending).


@JohnRichard4096
Member Author

@sourcery-ai title

@sourcery-ai sourcery-ai Bot changed the title from "Feat/reasoning and streaming" to "Introduce SuspendObjectStream and expand tool schema validation" Apr 19, 2026
Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey - I've found 6 issues and left some high-level feedback:

  • In chatmanager.py, RESPONSE_TYPE is still used (e.g., ChatObject(SuspendObjectStream[RESPONSE_TYPE]), RESPONSE_CALLBACK_TYPE) but its local TypeAlias definition was removed; either reintroduce the alias or import it from a shared module to avoid a NameError or type-checking issues.
  • The updated FunctionPropertySchema.validator uses elif has_array after if has_object, so schemas with union types that include both "object" and "array" will only get object validation; consider splitting these into independent if has_object: and if has_array: blocks so each type branch is validated when present in a union.
  • In SuspendObjectStream, _wait_for_continue is effectively part of the public suspend API (used in tests and docs) but remains a private method; if it is intended to be called by users, consider renaming or wrapping it with a public helper to make this clearer and reduce reliance on underscored internals.
## Individual Comments

### Comment 1
<location path="src/amrita_core/utils.py" line_range="94-98" />
<code_context>
         self.value = value
+
+
+def gather_usage(
+    base: UniResponseUsage[int],
+    *args: UniResponseUsage[int]
+    | UniResponseUsage[None]
+    | UniResponseUsage[int | None]
+    | None,
+) -> UniResponseUsage[int]:
+    """Gather usages
+
+    Args:
+        base(UniResponseUsage[int]): Base object of usage.
+        *args: Usages to gather.
+
+    Returns:
+        UniResponseUsage[int]: the gathered usage (base)
+    """
+    u = base
+    for usage in args:
+        if usage is None:
+            continue
+        u.prompt_tokens += n2zero(usage.prompt_tokens)
+        u.completion_tokens += n2zero(usage.completion_tokens)
+        u.total_tokens += usage.total_tokens or n2zero(usage.prompt_tokens) + n2zero(
+            usage.completion_tokens
+        )
+    return u
</code_context>
<issue_to_address>
**suggestion (bug_risk):** gather_usage may miscount total_tokens in some edge cases and relies on truthiness of numeric fields.

Using `usage.total_tokens or ...` conflates `0` and `None`. A legitimate `total_tokens == 0` will be treated as falsy and replaced with `prompt + completion`, which can lead to double-counting or inconsistent totals. It also means `total_tokens` is recomputed whenever it’s `None`, so the aggregated total may not match providers that define totals differently. Please branch explicitly on `is None`, e.g.:

```python
if usage.total_tokens is not None:
    u.total_tokens += usage.total_tokens
else:
    u.total_tokens += n2zero(usage.prompt_tokens) + n2zero(usage.completion_tokens)
```

```suggestion
        u.prompt_tokens += n2zero(usage.prompt_tokens)
        u.completion_tokens += n2zero(usage.completion_tokens)
        if usage.total_tokens is not None:
            u.total_tokens += usage.total_tokens
        else:
            u.total_tokens += n2zero(usage.prompt_tokens) + n2zero(
                usage.completion_tokens
            )
```
</issue_to_address>

### Comment 2
<location path="src/amrita_core/builtins/agent.py" line_range="128-130" />
<code_context>
+            MessageWithMetadata(
+                summary,
+                {
+                    "type": "reasoning",
+                    "extra_type": "pre_resolve",
+                    "last_strp": last_step,
+                    "summary": summary,
+                },
</code_context>
<issue_to_address>
**issue (bug_risk):** Metadata key "last_strp" looks like a typo and may break downstream consumers.

In the emitted reasoning metadata, the key is currently:

```python
"last_strp": last_step,
```
Please rename this to `"last_step"` to match the variable name and any existing consumers expecting that field.
</issue_to_address>

### Comment 3
<location path="tests/test_object_stream.py" line_range="10" />
<code_context>
+
+@pytest.mark.asyncio
+async def test_chatobject_suspend_tags():
+    obj = SuspendObjectStream()
+    suspend = False
+
</code_context>
<issue_to_address>
**suggestion (testing):** Add tests for queue/stream lifecycle and single-consumer guarantees

Since `SuspendObjectStream` manages an internal queue and enforces a single consumer, please add tests that:

- Push multiple items with `yield_response`, call `set_queue_done()`, then iterate `get_response_generator()` and assert all items are yielded in order, the generator stops at the done marker, and `queue_closed()` is `True`.
- Call `get_response_generator()` twice (or once plus setting a callback) and assert a `RuntimeError("Response is already being consumed.")` on the second consumer.

These will cover both normal streaming and the single-consumer constraint.

Suggested implementation:

```python
import asyncio

import pytest

from amrita_core.streaming import SuspendObjectStream


@pytest.mark.asyncio
async def test_chatobject_suspend_tags():
    obj = SuspendObjectStream()
    suspend = False

    # Basic sanity checks for the suspend object stream
    assert isinstance(obj, SuspendObjectStream)
    assert suspend is False


@pytest.mark.asyncio
async def test_suspend_object_stream_yields_items_in_order_and_closes_queue():
    obj = SuspendObjectStream()

    # Push multiple items into the internal queue
    await obj.yield_response("first")
    await obj.yield_response("second")
    await obj.yield_response("third")

    # Mark the queue as done
    obj.set_queue_done()

    # Consume all items from the response generator
    results = []
    async for item in obj.get_response_generator():
        results.append(item)

    # All items should be yielded in order and the queue should be closed
    assert results == ["first", "second", "third"]
    assert obj.queue_closed() is True


@pytest.mark.asyncio
async def test_suspend_object_stream_enforces_single_consumer():
    obj = SuspendObjectStream()

    # First consumer acquires the response generator
    gen1 = obj.get_response_generator()
    assert gen1 is not None

    # Second consumer attempt should raise a RuntimeError
    with pytest.raises(RuntimeError, match="Response is already being consumed."):
        obj.get_response_generator()

```

If `yield_response` is a synchronous method in your implementation, remove the `await` keywords before `obj.yield_response(...)`.  
If `set_queue_done` or `queue_closed` are asynchronous in your codebase, update the test to `await` them accordingly (e.g. `await obj.set_queue_done()` / `await obj.queue_closed()`).
</issue_to_address>

### Comment 4
<location path="tests/test_object_stream.py" line_range="33" />
<code_context>
-
-
-@pytest.mark.asyncio
-async def test_chatobject_suspend():
-    obj = ChatObject(
-        train={"role": "system", "content": "system message"},
</code_context>
<issue_to_address>
**suggestion (testing):** Test callback mode of SuspendObjectStream in addition to queue mode

These tests only cover the queue pathway (no callback set). Please also add a test for the callback mode that:

- Constructs `SuspendObjectStream` with a callback (or via `set_callback_func`).
- Calls `yield_response` multiple times and asserts the callback receives the expected values (e.g., via an async-safe accumulator/event).
- Optionally asserts that `get_response_generator()` with a callback set raises the expected `RuntimeError`.

That way both modes are exercised and protected against regressions.

Suggested implementation:

```python
    try:
        await obj.wait_to_suspend(timeout=2)
        obj.resume()
        await asyncio.wait_for(hd, 0.2)
        assert suspend, "Suspend not called"
    finally:
        hd.cancel()


@pytest.mark.asyncio
async def test_suspendobjectstream_callback_mode():
    obj = SuspendObjectStream()
    received: list[Any] = []
    all_received = asyncio.Event()

    def callback(value: Any) -> None:
        received.append(value)
        # Once we've seen multiple responses, signal the test to continue
        if len(received) >= 2:
            all_received.set()

    # Configure callback mode
    obj.set_callback_func(callback)

    # In callback mode, responses should be delivered via the callback
    await obj.yield_response("first")
    await obj.yield_response("second")

    await asyncio.wait_for(all_received.wait(), timeout=1.0)
    assert received == ["first", "second"]

    # In callback mode, queue-based consumption should not be available
    with pytest.raises(RuntimeError):
        _ = obj.get_response_generator()

```

This test assumes:
1. `SuspendObjectStream.set_callback_func` exists and accepts a sync callback.
2. `SuspendObjectStream.yield_response` is `async` and delivers the value to the callback.
3. `SuspendObjectStream.get_response_generator()` raises `RuntimeError` when a callback is set.
If any of these differ in your implementation, adjust the test accordingly (e.g., use the constructor to pass the callback, tweak the exception type/message, or adapt to a sync `yield_response`).
Also ensure `Any` is imported (`from typing import Any`) at the top of the test file if it is not already.
</issue_to_address>

### Comment 5
<location path="docs/guide/function-implementation.md" line_range="127" />
<code_context>
 #### Exception Handling Best Practices

-Since version 0.8.0, the default `on_exception()` method in [AgentStrategy](../api-reference/classes/AgentStrategy.md) no longer raises exceptions by default. This change provides more flexibility for custom error handling:
+Since the default `on_exception()` method in [AgentStrategy](../api-reference/classes/AgentStrategy.md) no longer raises exceptions by default. This change provides more flexibility for custom error handling:

 ```python
</code_context>
<issue_to_address>
**issue (typo):** This sentence starting with "Since" is grammatically incomplete and should be rephrased.

"Since the default `on_exception()` method..." is a sentence fragment. Either drop "Since" (e.g., "The default `on_exception()` method... no longer raises...") or add a main clause after the comma (e.g., "Since the default ... no longer raises exceptions by default, this change provides more flexibility...").

```suggestion
The default `on_exception()` method in [AgentStrategy](../api-reference/classes/AgentStrategy.md) no longer raises exceptions by default. This change provides more flexibility for custom error handling:
```
</issue_to_address>

### Comment 6
<location path="docs/guide/builtins.md" line_range="24" />
<code_context>
 - **Parameters**:
-  - `content`: Describe what needs to be done next (required).
+  - `last_step`: The last step you took (if there are no steps that you had done, please leave this blank).
+  - `summary`: What are you thinking about (not thinking content) - a brief summary of your current focus or intention.

 ### 9.1.3 PROCESS_MESSAGE (Process Message Tool)
</code_context>
<issue_to_address>
**nitpick (typo):** The parenthetical phrase "not thinking content" is awkward and could be clarified.

Consider rephrasing this parenthetical to something like "(not the full thinking process)" or "(not your detailed reasoning)" so the meaning is grammatically clear to readers.

```suggestion
  - `summary`: What are you thinking about (not your detailed reasoning) - a brief summary of your current focus or intention.
```
</issue_to_address>


@cloudflare-workers-and-pages

cloudflare-workers-and-pages Bot commented Apr 19, 2026

Deploying amritacore with Cloudflare Pages

Latest commit: e697213
Status: ✅  Deploy successful!
Preview URL: https://b1b44ec2.amritacore.pages.dev
Branch Preview URL: https://feat-reasoning-and-streaming.amritacore.pages.dev

View logs

JohnRichard4096 and others added 4 commits April 19, 2026 21:29
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
@JohnRichard4096 JohnRichard4096 merged commit 6a30baa into main Apr 19, 2026
3 checks passed
@JohnRichard4096 JohnRichard4096 deleted the feat/reasoning-and-streaming branch April 19, 2026 14:37


Development

Successfully merging this pull request may close these issues.

[BUG] Agent stop process wrong.
