Reviewer's Guide

Refactors the hook matcher system to support a generic dependency injection mechanism (Depends/DependsFactory), priority-based handler registries, and new preset fallback events, while adding an Anthropic adapter, CoT thinking filtering, preset fallback retries, and accompanying docs/tests and config updates.

Sequence diagram for triggering events with dependency injection

```mermaid
sequenceDiagram
    actor User
    participant ChatObject
    participant MatcherManager
    participant EventRegistry
    participant DependsFactory
    participant Handler as EventHandler
    User->>ChatObject: create ChatObject(hook_args, hook_kwargs)
    User->>ChatObject: send_message()
    ChatObject->>ChatObject: build BaseEvent
    ChatObject->>MatcherManager: trigger_event(event, config, *hook_args, **hook_kwargs)
    activate MatcherManager
    MatcherManager->>EventRegistry: get_handlers(event.get_event_type())
    EventRegistry-->>MatcherManager: handlers_by_priority
    loop priorities
        MatcherManager->>MatcherManager: _simple_run(matcher_list, ...)
        loop matcher in priority
            MatcherManager->>MatcherManager: prepare session_args, session_kwargs
            MatcherManager->>MatcherManager: collect runtime DependsFactory in args/kwargs
            alt has runtime dependencies
                MatcherManager->>DependsFactory: resolve(*session_args, **session_kwargs) concurrent
                DependsFactory-->>MatcherManager: resolved values or None
                alt any result is None
                    MatcherManager-->>MatcherManager: runtime resolve failed
                    MatcherManager-->>MatcherManager: raise RuntimeError
                end
            end
            MatcherManager->>MatcherManager: _resolve_dependencies(signature, session_args, session_kwargs)
            alt dependencies resolvable
                MatcherManager->>DependsFactory: resolve(... d_kw ...) concurrent
                DependsFactory-->>MatcherManager: injected kwargs or None
                alt any dependency None
                    MatcherManager-->>MatcherManager: skip this handler
                else all ok
                    MatcherManager->>Handler: await handler(*new_args, **f_kwargs)
                    alt handler raises PassException
                        MatcherManager-->>MatcherManager: continue to next handler
                    else handler raises CancelException or BlockException
                        MatcherManager-->>MatcherManager: stop processing, return False
                    else success
                        MatcherManager-->>MatcherManager: if matcher.block then stop
                    end
                end
            else not resolvable
                MatcherManager-->>MatcherManager: log warning and skip handler
            end
        end
    end
    MatcherManager-->>ChatObject: return
    deactivate MatcherManager
```
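In code, the runtime-dependency resolution step in the diagram might look like the following sketch. `DependsFactory` and `resolve` are named after the diagram; the body (sync/async provider support, `None` on failure) is an assumption, not the project's actual implementation.

```python
import asyncio
import inspect
from typing import Any, Callable, Optional


class DependsFactory:
    """Hypothetical stand-in for the DependsFactory in the diagram:
    wraps a sync or async provider and returns None when resolution fails."""

    def __init__(self, dependency: Callable[..., Any]) -> None:
        self._dependency_func = dependency

    async def resolve(self, *args: Any, **kwargs: Any) -> Optional[Any]:
        try:
            result = self._dependency_func(*args, **kwargs)
            if inspect.isawaitable(result):
                result = await result
            return result
        except Exception:
            # The sequence diagram treats None as "resolution failed",
            # which makes the caller skip (or abort) the handler.
            return None


async def demo() -> None:
    async def current_user() -> str:
        return "alice"

    dep = DependsFactory(current_user)
    print(await dep.resolve())  # -> alice


asyncio.run(demo())
```

Note that returning `None` to signal failure (rather than raising) is why the matcher loop has to check every resolved value before invoking the handler.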
Sequence diagram for preset fallback retry flow

```mermaid
sequenceDiagram
    actor User
    participant ChatObject
    participant Adapter as ChatAdapter
    participant MatcherManager
    participant FallbackHandler
    User->>ChatObject: send_message()
    ChatObject->>ChatObject: build PreCompletionEvent and others
    ChatObject->>ChatObject: _process_chat()
    ChatObject->>ChatObject: get_context_messages()
    ChatObject->>ChatObject: send_messages = context.unwrap()
    loop term in 1..config.llm.max_fallbacks
        ChatObject->>ChatObject: used_preset.add(preset.name)
        ChatObject->>Adapter: call_completion(send_messages, config, preset)
        alt adapter succeeds
            Adapter-->>ChatObject: UniResponse chunks and text
            ChatObject->>ChatObject: response = final UniResponse
            ChatObject-->>User: stream chunks via yield_response
            %% break
        else adapter raises Exception e
            Adapter-->>ChatObject: Exception e
            ChatObject->>ChatObject: log warning with e and term
            ChatObject->>MatcherManager: trigger_event(FallbackContext(preset, e, config, context_wrap, term), config, exception_ignored=(FallbackFailed,))
            MatcherManager->>FallbackHandler: invoke on_preset_fallback handler
            alt handler updates event.preset
                FallbackHandler-->>MatcherManager: return
                MatcherManager-->>ChatObject: return
                ChatObject->>ChatObject: preset = event.preset
            else handler calls event.fail(reason)
                FallbackHandler-->>MatcherManager: raise FallbackFailed
                MatcherManager-->>ChatObject: propagate FallbackFailed
                ChatObject-->>User: error
                %% break
            end
        end
    end
    alt no successful attempt
        ChatObject-->>User: raise FallbackFailed("Max preset fallbacks retries exceeded.")
    end
```
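The retry loop above can be sketched as follows. All names here (`FallbackFailed`, the preset-advancing policy) are simplified stand-ins for the real event-driven flow, where an `on_preset_fallback` handler decides the next preset.

```python
import asyncio


class FallbackFailed(RuntimeError):
    pass


async def process_chat(presets, call_completion, max_fallbacks):
    """Try presets in order, at most max_fallbacks times (simplified sketch)."""
    preset = presets[0]
    last_exc = None
    for term in range(1, max_fallbacks + 1):
        try:
            return await call_completion(preset)
        except Exception as e:
            last_exc = e
            # In the real flow a PRESET_FALLBACK handler picks the next
            # preset, or raises FallbackFailed via event.fail().
            idx = presets.index(preset)
            if idx + 1 >= len(presets):
                raise FallbackFailed("No preset fallback available, exiting!") from e
            preset = presets[idx + 1]
    raise FallbackFailed("Max preset fallbacks retries exceeded.") from last_exc


async def demo():
    async def call(preset):
        if preset == "primary":
            raise ConnectionError("upstream down")
        return f"ok via {preset}"

    print(await process_chat(["primary", "backup"], call, max_fallbacks=3))


asyncio.run(demo())
```

The two distinct failure messages mirror the diagram: one for "no more presets to try", one for exhausting `max_fallbacks` iterations.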
Class diagram for matcher, dependency injection, and event registry

```mermaid
classDiagram
    class EventRegistry {
        +defaultdict~str, defaultdict~int, list~FunctionData~~ _event_handlers
        +register_handler(event_type: str, data: FunctionData)
        +get_handlers(event_type: str) defaultdict~int, list~FunctionData~~
        +get_all() defaultdict~str, defaultdict~int, list~FunctionData~~~~
        +_all() defaultdict~str, defaultdict~int, list~FunctionData~~~~
    }
    class FunctionData {
        +Callable[..., Awaitable~Any~~] function
        +inspect.Signature signature
        +FrameType frame
        +int priority
        +Matcher matcher
    }
    class Matcher {
        +EventTypeEnum|str event_type
        +int priority
        +bool block
        +__call__(*args, **kwargs) Callable
        +append_handler(func: Callable[..., Awaitable~Any~~])
        +stop_process()
        +cancel_matcher()
        +pass_event()
        +set_block(block: bool)
    }
    class DependsFactory~T~ {
        -Callable[..., T|Awaitable~T~~] _depency_func
        +DependsFactory(depency: Callable[..., T|Awaitable~T~~])
        +resolve(*args, **kwargs) T|None
    }
    class MatcherFactory {
        +_resolve_dependencies(signature: inspect.Signature, session_args: Iterable~Any~, session_kwargs: dict~str, Any~) (bool, tuple, dict~str, Any~, dict~str, DependsFactory~~)
        +_do_runtime_resolve(runtime_args: dict~int, DependsFactory~~, runtime_kwargs: dict~str, DependsFactory~~, args2update: list~Any~, kwargs2update: dict~str, Any~, session_args: list~Any~, session_kwargs: dict~str, Any~, exception_ignored: tuple~type~BaseException~~, ...~) bool
        +_simple_run(matcher_list: list~FunctionData~, event: BaseEvent, config: AmritaConfig, exception_ignored: tuple~type~BaseException~~, ..., extra_args: tuple, extra_kwargs: dict~str, Any~) bool
        +trigger_event(event: BaseEvent, config: AmritaConfig, *args, exception_ignored: tuple~type~Exception~~, **kwargs) None
    }
    class ChatException {
    }
    class MatcherException {
    }
    class BlockException {
    }
    class CancelException {
    }
    class PassException {
    }
    MatcherManager <|-- MatcherFactory
    EventRegistry o--> FunctionData
    FunctionData --> Matcher
    Matcher ..> EventRegistry : registers
    DependsFactory ..> MatcherFactory : used by
    MatcherFactory ..> EventRegistry : reads
    ChatException <|-- MatcherException
    MatcherException <|-- BlockException
    MatcherException <|-- CancelException
    MatcherException <|-- PassException
    ChatException <.. MatcherException : type alias
```
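A minimal sketch of how this registry and decorator shape might fit together (names follow the diagram; the real `Matcher`/`EventRegistry` carry more state, such as `block`, the source frame, and the bound matcher):

```python
import inspect
from collections import defaultdict
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class FunctionData:
    function: Callable[..., Any]
    signature: inspect.Signature
    priority: int


class EventRegistry:
    # event_type -> priority -> list of handler records
    _event_handlers: dict = defaultdict(lambda: defaultdict(list))

    @classmethod
    def register_handler(cls, event_type: str, data: FunctionData) -> None:
        cls._event_handlers[event_type][data.priority].append(data)

    @classmethod
    def get_handlers(cls, event_type: str):
        return cls._event_handlers[event_type]


class Matcher:
    """Decorator that registers an async handler under an event type."""

    def __init__(self, event_type: str, priority: int = 10, block: bool = False):
        self.event_type, self.priority, self.block = event_type, priority, block

    def __call__(self, func: Callable[..., Any]) -> Callable[..., Any]:
        EventRegistry.register_handler(
            self.event_type,
            FunctionData(func, inspect.signature(func), self.priority),
        )
        return func


@Matcher("preset_fallback", priority=1)
async def on_preset_fallback(event):
    ...


print(sorted(EventRegistry.get_handlers("preset_fallback")))  # -> [1]
```

Grouping handlers by priority in a nested `defaultdict` is what lets `trigger_event` iterate priorities in order and run each priority bucket as one batch.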
Class diagram for events, fallback context, and config

```mermaid
classDiagram
    class EventTypeEnum {
        <<enumeration>>
        PRESET_FALLBACK
        COMPLETION
        MESSAGE
        MEMORY
        Nil
        BEFORE_COMPLETION
        validate(name: str) bool
    }
    class BaseEvent {
        <<abstract>>
        +get_event_type() EventTypeEnum|str
        +event_type EventTypeEnum|str
    }
    class Event {
        +USER_INPUT user_input
        +ChatObject chat_object
        +get_event_type() EventTypeEnum|str
        +event_type EventTypeEnum|str
    }
    class FallbackContext {
        +ModelPreset preset
        +BaseException exc_info
        +AmritaConfig config
        +SendMessageWrap context
        +int term
        -EventTypeEnum _event_type
        +get_event_type() EventTypeEnum
        +event_type EventTypeEnum
        +fail(reason: Any|None) Never
    }
    class FallbackFailed {
        +FallbackFailed(*value: object)
    }
    class AmritaConfig {
        +LLMConfig llm
    }
    class LLMConfig {
        +bool auto_retry
        +int max_retries
        +int max_fallbacks
        +bool enable_memory_abstract
    }
    class ModelPreset {
        +str name
        +str model
        +str api_key
        +str base_url
        +ModelConfig config
    }
    class ModelConfig {
        +bool stream
        +bool support_tool
        +bool support_vision
        +bool cot_model
    }
    class SendMessageWrap {
    }
    class ChatObject {
        +ModelPreset preset
        +AmritaConfig config
        +SendMessageWrap context_wrap
    }
    BaseEvent <|-- Event
    BaseEvent <|-- FallbackContext
    EventTypeEnum <.. BaseEvent
    FallbackContext ..> ModelPreset
    FallbackContext ..> AmritaConfig
    FallbackContext ..> SendMessageWrap
    FallbackContext ..> FallbackFailed : fail()
    FallbackFailed --|> RuntimeError
    AmritaConfig --> LLMConfig
    ModelPreset --> ModelConfig
    ChatObject --> ModelPreset
    ChatObject --> AmritaConfig
    ChatObject --> SendMessageWrap
```
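Given this `FallbackContext` shape, a preset-fallback handler has two outcomes: mutate `event.preset` to pick the next preset, or call `event.fail(...)` to abort. The sketch below uses trimmed-down stand-ins for `ModelPreset`/`FallbackContext` (the real dataclasses carry more fields) and a hypothetical `PRESETS` list.

```python
from dataclasses import dataclass
from typing import Any, NoReturn, Optional


class FallbackFailed(RuntimeError):
    pass


@dataclass
class ModelPreset:
    name: str


@dataclass
class FallbackContext:
    preset: ModelPreset
    term: int

    def fail(self, reason: Optional[Any] = None) -> NoReturn:
        raise FallbackFailed(reason)


# Hypothetical ordered preset list the handler walks through.
PRESETS = [ModelPreset("primary"), ModelPreset("backup")]


async def on_preset_fallback(event: FallbackContext) -> None:
    names = [p.name for p in PRESETS]
    idx = names.index(event.preset.name)
    if idx + 1 < len(PRESETS):
        event.preset = PRESETS[idx + 1]  # retry with the next preset
    else:
        event.fail("No preset fallback available, exiting!")
```

Because `fail()` raises `FallbackFailed`, the caller must list it in `exception_ignored` for the error to propagate cleanly out of `trigger_event`.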
Class diagram for ModelAdapter, OpenAIAdapter, and AnthropicAdapter

```mermaid
classDiagram
    class ModelAdapter {
        <<abstract>>
        +ModelPreset preset
        +AmritaConfig config
        +__init__(preset: ModelPreset, config: AmritaConfig)
        +call_api(messages: Iterable, **kwargs) AsyncGenerator~COMPLETION_RETURNING, None~
        +call_tools(tools, *args, **kwargs) AsyncGenerator~Any, None~
        +get_adapter_protocol() str|tuple~str, ...~
    }
    class OpenAIAdapter {
        +call_api(messages: Iterable~ChatCompletionMessageParam~, *args, **kwargs) AsyncGenerator~COMPLETION_RETURNING, None~
        +call_tools(tools: Iterable, *args, **kwargs) AsyncGenerator~Any, None~
        +get_adapter_protocol() str|tuple~str, ...~
    }
    class AnthropicAdapter {
        +call_api(messages: Iterable, *args, **kwargs) AsyncGenerator~COMPLETION_RETURNING, None~
        +get_adapter_protocol() str|tuple~str, ...~
    }
    class ModelPreset {
        +str name
        +str model
        +str api_key
        +str base_url
        +ModelConfig config
    }
    class ModelConfig {
        +bool stream
        +bool support_tool
        +bool support_vision
        +bool cot_model
    }
    class AmritaConfig {
        +LLMConfig llm
    }
    class LLMConfig {
        +int max_tokens
        +int max_retries
        +int max_fallbacks
        +float temperature
        +float top_p
    }
    class UniResponseUsage~T_INT~ {
        +T_INT prompt_tokens
        +T_INT completion_tokens
        +T_INT total_tokens
    }
    class UniResponse {
        +str|None role
        +str content
        +UniResponseUsage~int~ usage
        +Any tool_calls
    }
    ModelAdapter <|-- OpenAIAdapter
    ModelAdapter <|-- AnthropicAdapter
    ModelAdapter --> ModelPreset
    ModelAdapter --> AmritaConfig
    ModelPreset --> ModelConfig
    AmritaConfig --> LLMConfig
    OpenAIAdapter ..> UniResponse
    AnthropicAdapter ..> UniResponse
    UniResponse --> UniResponseUsage~int~
```
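The adapter surface boils down to "async-generate `UniResponse` chunks plus a protocol name". As a sketch only (the real `ModelAdapter`/`UniResponse` live in amrita and carry more fields and configuration), a toy adapter honoring that contract could look like:

```python
import asyncio
from dataclasses import dataclass
from typing import Any, AsyncGenerator, Iterable, Optional


@dataclass
class UniResponse:
    # Trimmed-down stand-in for the UniResponse in the diagram.
    role: Optional[str] = "assistant"
    content: str = ""


class EchoAdapter:
    """Toy adapter following the abstract surface: call_api streams
    response chunks, get_adapter_protocol names the wire protocol."""

    def get_adapter_protocol(self) -> str:
        return "echo"

    async def call_api(
        self, messages: Iterable[dict[str, Any]], **kwargs: Any
    ) -> AsyncGenerator[UniResponse, None]:
        last = list(messages)[-1]["content"]
        for word in last.split():
            yield UniResponse(content=word)


async def demo() -> None:
    chunks = [
        c.content
        async for c in EchoAdapter().call_api(
            [{"role": "user", "content": "hello fallback world"}]
        )
    ]
    print(chunks)  # -> ['hello', 'fallback', 'world']


asyncio.run(demo())
```

Streaming chunks through an async generator is what lets `ChatObject` forward partial output via `yield_response` while a completion is still in flight.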
File-Level Changes
Assessment against linked issues
Tips and commandsInteracting with Sourcery
Customizing Your ExperienceAccess your dashboard to:
Getting Help
Sorry @JohnRichard4096, you have reached your weekly rate limit of 500000 diff characters. Please try again later or upgrade to continue using Sourcery
Hey - I've found 4 issues, and left some high level feedback:
- In `model_dump`, the list comprehension reuses the parameter name (`for obj in obj`), which both shadows the outer variable and is confusing; use a different loop variable (e.g. `for item in obj`) to avoid bugs and improve readability.
- The `T_INT` TypeVar in `types.py` is declared as `TypeVar("T_INT", int, None, int | None)`, which mixes duplicate and incompatible constraints; this will fail at import time and should be simplified to valid constraints (e.g. `TypeVar("T_INT", int, None)` or a proper bound).
- The `exception_ignored` argument for `MatcherFactory.trigger_event` is now a keyword-only parameter, but `ChatObject._process_chat` calls `MatcherManager.trigger_event(ctx, ctx.config, (FallbackFailed,))` positionally, so the exceptions won't actually be treated as ignored; update the call site to `exception_ignored=(FallbackFailed,)`.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `model_dump`, the list comprehension reuses the parameter name (`for obj in obj`), which both shadows the outer variable and is confusing; use a different loop variable (e.g. `for item in obj`) to avoid bugs and improve readability.
- The `T_INT` TypeVar in `types.py` is declared as `TypeVar("T_INT", int, None, int | None)`, which mixes duplicate and incompatible constraints; this will fail at import time and should be simplified to valid constraints (e.g. `TypeVar("T_INT", int, None)` or a proper bound).
- The `exception_ignored` argument for `MatcherFactory.trigger_event` is now a keyword-only parameter, but `ChatObject._process_chat` calls `MatcherManager.trigger_event(ctx, ctx.config, (FallbackFailed,))` positionally, so the exceptions won't actually be treated as ignored; update the call site to `exception_ignored=(FallbackFailed,)`.
## Individual Comments
### Comment 1
<location path="src/amrita_core/hook/matcher.py" line_range="448-449" />
<code_context>
+ elif isinstance(chunk, MessageContent | str):
+ await self.yield_response(chunk)
+ break
+ except Exception as e:
+ logger.warning(
+ f"Because of `{e!s}`, LLM request failed, retrying ({i}/{self.config.llm.max_retries})..."
</code_context>
<issue_to_address>
**issue (bug_risk):** Using `CancelException | BlockException` in `isinstance` is unsafe and will raise a `TypeError`.
Here, using a PEP 604 union (`CancelException | BlockException`) as the second argument to `isinstance` will always raise `TypeError` at runtime, breaking the exception handling path. Please change this to `isinstance(e, (CancelException, BlockException))` so the check works correctly.
</issue_to_address>
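One nuance worth noting for this comment: the union form is not universally broken. `isinstance(e, A | B)` is accepted on Python 3.10+, where PEP 604 unions support instance checks, but raises `TypeError` on 3.9 and earlier. The tuple form is the portable spelling:

```python
class CancelException(Exception):
    pass


class BlockException(Exception):
    pass


def is_stop_signal(e: BaseException) -> bool:
    # Tuple form: valid on every Python version.
    # isinstance(e, CancelException | BlockException) only works on 3.10+.
    return isinstance(e, (CancelException, BlockException))


print(is_stop_signal(CancelException()))  # -> True
print(is_stop_signal(ValueError("x")))    # -> False
```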
### Comment 2
<location path="src/amrita_core/chatmanager.py" line_range="840-841" />
<code_context>
+ logger.warning(
+ f"Because of `{e!s}`, LLM request failed, retrying ({i}/{self.config.llm.max_retries})..."
+ )
+ ctx = FallbackContext(self.preset, e, self.config, self.context_wrap, i)
+ await MatcherManager.trigger_event(ctx, ctx.config, (FallbackFailed,))
+ if ctx.preset is self.preset:
+ ctx.fail("No preset fallback available, exiting!")
</code_context>
<issue_to_address>
**issue (bug_risk):** `FallbackFailed` is passed positionally and never reaches `exception_ignored` in `trigger_event`.
Here `(FallbackFailed,)` goes into `*args`, so `exception_ignored` stays as its default `()`. That means `FallbackFailed` is not actually treated as an ignored/propagated exception. If you intend it to be ignored, call this as `await MatcherManager.trigger_event(ctx, ctx.config, exception_ignored=(FallbackFailed,))` (or via `**kwargs`).
</issue_to_address>
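The mechanics of this bug can be reproduced with a toy signature (names here are illustrative, not the project's real `trigger_event`):

```python
def trigger_event(event, config, *args, exception_ignored=()):
    """Toy signature mirroring the keyword-only parameter discussed above."""
    return args, exception_ignored


# Positional call: the tuple is swallowed by *args.
args, ignored = trigger_event("evt", "cfg", (RuntimeError,))
print(args)     # -> ((<class 'RuntimeError'>,),)
print(ignored)  # -> ()

# Keyword call: the tuple reaches exception_ignored as intended.
args, ignored = trigger_event("evt", "cfg", exception_ignored=(RuntimeError,))
print(args)     # -> ()
print(ignored)  # -> (<class 'RuntimeError'>,)
```

Because `exception_ignored` sits after `*args`, Python can never bind a positional argument to it; the call silently succeeds with the default `()`, which is exactly why the bug went unnoticed.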
### Comment 3
<location path="src/amrita_core/chatmanager.py" line_range="837-839" />
<code_context>
+ await self.yield_response(chunk)
+ break
+ except Exception as e:
+ logger.warning(
+ f"Because of `{e!s}`, LLM request failed, retrying ({i}/{self.config.llm.max_retries})..."
+ )
+ ctx = FallbackContext(self.preset, e, self.config, self.context_wrap, i)
</code_context>
<issue_to_address>
**suggestion:** Retry log message uses `max_retries` while control flow is governed by `max_fallbacks`.
Within the preset fallback loop (`for i in range(1, self.config.llm.max_fallbacks + 1)`), the warning logs `({i}/{self.config.llm.max_retries})`, which is misleading because the loop is actually bounded by `max_fallbacks`. Please either use `max_fallbacks` in the log or reword it to clearly distinguish network retries from preset fallbacks.
```suggestion
logger.warning(
f"Because of `{e!s}`, LLM request failed, retrying preset fallback ({i}/{self.config.llm.max_fallbacks})..."
)
```
</issue_to_address>
### Comment 4
<location path="tests/test_utils.py" line_range="22-24" />
<code_context>
- assert "00:00:00" in result or "AM" in result
-
-
def test_split_list():
lst = [1, 2, 3, 4, 5, 6]
assert split_list(lst, 2) == [[1, 2], [3, 4], [5, 6]]
</code_context>
<issue_to_address>
**suggestion (testing):** Add edge-case coverage for `split_list` (threshold 1, equal to length, and greater than length).
The current `test_split_list` only covers the evenly divisible case. To better exercise the new generic implementation, please also test:
- `threshold == len(lst)` → returns `[lst]`
- `threshold > len(lst)` → returns `[lst]`
- `threshold == 1` → returns a list of single-element lists
These cases help detect off-by-one issues and future regressions in the split logic.
</issue_to_address>
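The suggested edge cases can be pinned down with a sketch. This `split_list` is a plausible reimplementation for illustration only; the project's real one lives in its utils and may differ.

```python
from typing import List, TypeVar

T = TypeVar("T")


def split_list(lst: List[T], threshold: int) -> List[List[T]]:
    # Chunk into consecutive slices of at most `threshold` items.
    return [lst[i : i + threshold] for i in range(0, len(lst), threshold)]


assert split_list([1, 2, 3, 4, 5, 6], 2) == [[1, 2], [3, 4], [5, 6]]
assert split_list([1, 2, 3], 3) == [[1, 2, 3]]      # threshold == len
assert split_list([1, 2, 3], 10) == [[1, 2, 3]]     # threshold > len
assert split_list([1, 2, 3], 1) == [[1], [2], [3]]  # threshold == 1
```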
close #22
Summary by Sourcery
Introduce a dependency injection–aware matcher/event system with preset fallback handling, Anthropic adapter support, and chain-of-thought filtering, while updating configuration and docs accordingly.
New Features:
Bug Fixes:
Enhancements:
Build:
Documentation:
Tests: