Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces a significant enhancement to how reasoning content from large language models is presented to users on the Lark platform by integrating a new collapsible reasoning panel.
Hey - I've found 2 issues, and left some high-level feedback:

- The reasoning card construction logic is duplicated between `_build_reasoning_collapsible_panel` and `_build_reasoning_card`; consider extracting a shared helper for building a single collapsible panel (with shared colors, padding, defaults, etc.) to avoid divergence in future changes.
- `_build_reasoning_card` currently aborts and returns `None` on the first non-`Json`/`Plain` component, which silently disables the card path even if other valid reasoning markers exist; consider either skipping unsupported components or making this behavior explicit (e.g., via logging) so it's easier to understand why a card wasn't generated.
- There are several hard-coded magic values related to Lark UI (e.g., `"grey"` colors, padding/margin strings, the `"lark_collapsible_panel_reasoning"` type, the `"💭 Thinking"` title, and the platform name string `"lark"`); introducing constants or enums for these would make the behavior easier to tweak and reduce the risk of typos.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The reasoning card construction logic is duplicated between `_build_reasoning_collapsible_panel` and `_build_reasoning_card`; consider extracting a shared helper for building a single collapsible panel (with shared colors, padding, defaults, etc.) to avoid divergence in future changes.
- `_build_reasoning_card` currently aborts and returns `None` on the first non-`Json`/`Plain` component, which silently disables the card path even if other valid reasoning markers exist; consider either skipping unsupported components or making this behavior explicit (e.g., via logging) so it's easier to understand why a card wasn’t generated.
- There are several hard-coded magic values related to Lark UI (e.g., `"grey"` colors, padding/margin strings, `"lark_collapsible_panel_reasoning"` type, `"💭 Thinking"` title, and the platform name string `"lark"`); introducing constants or enums for these would make the behavior easier to tweak and reduce the risk of typos.
## Individual Comments
### Comment 1
<location path="astrbot/core/pipeline/result_decorate/stage.py" line_range="277-279" />
<code_context>
# inject reasoning content to chain
- reasoning_content = event.get_extra("_llm_reasoning_content")
- result.chain.insert(0, Plain(f"🤔 思考: {reasoning_content}\n"))
+ reasoning_content = str(event.get_extra("_llm_reasoning_content"))
+ if event.get_platform_name() == "lark":
+ result.chain.insert(
</code_context>
<issue_to_address>
**suggestion:** Guard against `None` or empty reasoning when building the Lark panel marker.
Casting `event.get_extra("_llm_reasoning_content")` with `str(...)` turns `None` into the literal "None" and still inserts a marker for empty/whitespace-only content. Since downstream logic ignores empty `content`, this leaves you with a no-op component. Consider normalizing first, e.g. `raw = event.get_extra(...); reasoning_content = (raw or "").strip()`, and only inserting the reasoning component when `reasoning_content` is non-empty to avoid spurious markers and "None" text.
```suggestion
# inject reasoning content to chain
reasoning_content = (event.get_extra("_llm_reasoning_content") or "").strip()
if reasoning_content and event.get_platform_name() == "lark":
```
</issue_to_address>
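To make the pitfall concrete, here is a minimal standalone sketch of the suggested normalization (the helper name `normalize_reasoning` is illustrative, not part of the PR):

```python
def normalize_reasoning(raw):
    """Collapse None and whitespace-only reasoning to an empty string."""
    return (raw or "").strip()


# Plain str() turns a missing value into the literal text "None":
assert str(None) == "None"

# The normalized form is falsy for every "no content" case,
# so the marker insertion can be skipped cleanly:
assert normalize_reasoning(None) == ""
assert normalize_reasoning("   \n") == ""
assert normalize_reasoning(" chain of thought ") == "chain of thought"
```

With this shape, a simple `if reasoning_content:` guard covers both the `None` and whitespace-only cases in one check.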
### Comment 2
<location path="astrbot/core/platform/sources/lark/lark_event.py" line_range="283" />
<code_context>
ret.append(_stage)
return ret
+ @staticmethod
+ def _build_reasoning_collapsible_panel(reasoning_content: str, title: str) -> dict:
+ return {
</code_context>
<issue_to_address>
**issue (complexity):** Consider refactoring the new reasoning panel and streaming logic by centralizing panel/card construction and extracting reasoning-specific and card-id checks into small helpers to keep `send_message_chain` and the streaming loop simple and readable.
You can keep all current behavior but reduce the new complexity by centralizing the panel/card building and pulling the reasoning-handling flow out of `send_message_chain`.
### 1. Deduplicate panel JSON construction
Right now `_build_reasoning_collapsible_panel` and `_build_reasoning_card` hardcode very similar structures. You can centralize the panel element creation and reuse it both places:
```python
@staticmethod
def _build_reasoning_panel_element(
content: str,
title: str = "💭 Thinking",
expanded: bool = False,
) -> dict:
return {
"tag": "collapsible_panel",
"expanded": bool(expanded),
"background_color": "grey",
"padding": "8px 8px 8px 8px",
"margin": "4px 0px 4px 0px",
"border": {
"color": "grey",
"corner_radius": "6px",
},
"header": {
"title": {
"tag": "plain_text",
"content": title,
},
"background_color": "grey",
},
"elements": [
{"tag": "markdown", "content": content},
],
}
```
Then:
```python
@staticmethod
def _build_reasoning_collapsible_panel(reasoning_content: str, title: str) -> dict:
return {
"schema": "2.0",
"body": {
"elements": [
LarkMessageEvent._build_reasoning_panel_element(
content=reasoning_content,
title=title,
expanded=False,
),
],
},
}
```
And `_build_reasoning_card` becomes simpler and consistent:
```python
@staticmethod
def _build_reasoning_card(message_chain: MessageChain) -> dict | None:
elements: list[dict] = []
for comp in message_chain.chain:
if isinstance(comp, Json) and isinstance(comp.data, dict):
if comp.data.get("type") != "lark_collapsible_panel_reasoning":
continue
reasoning_content = str(comp.data.get("content", "")).strip()
if not reasoning_content:
continue
elements.append(
LarkMessageEvent._build_reasoning_panel_element(
content=reasoning_content,
title=str(comp.data.get("title", "💭 Thinking")),
expanded=bool(comp.data.get("expanded", False)),
),
)
elif isinstance(comp, Plain):
if comp.text:
elements.append({"tag": "markdown", "content": comp.text})
else:
return None
return {"schema": "2.0", "body": {"elements": elements}} if elements else None
```
This removes the duplicated dict literals and keeps styling changes in one place.
### 2. Extract reasoning handling out of `send_message_chain`
The buffering + per-component reasoning handling makes `send_message_chain` harder to follow and duplicates marker handling semantics with the early-card branch. You can move “other components + reasoning markers” into a dedicated helper and call it from `send_message_chain`.
For example:
```python
@staticmethod
async def _send_other_components_with_reasoning(
components: list,
lark_client: lark.Client,
reply_message_id: str | None,
receive_id: str | None,
receive_id_type: str | None,
) -> None:
buffered_components: list = []
async def _flush_buffer() -> None:
nonlocal buffered_components
if not buffered_components:
return
pending_chain = MessageChain()
pending_chain.chain = buffered_components
buffered_components = []
res = await LarkMessageEvent._convert_to_lark(pending_chain, lark_client)
if not res:
return
wrapped = {
"zh_cn": {
"title": "",
"content": res,
},
}
await LarkMessageEvent._send_im_message(
lark_client,
content=json.dumps(wrapped),
msg_type="post",
reply_message_id=reply_message_id,
receive_id=receive_id,
receive_id_type=receive_id_type,
)
for comp in components:
if isinstance(comp, Json) and isinstance(comp.data, dict):
if comp.data.get("type") == "lark_collapsible_panel_reasoning":
await _flush_buffer()
reason_text = str(comp.data.get("content", "")).strip()
if reason_text:
panel_title = str(comp.data.get("title", "💭 Thinking"))
success = await LarkMessageEvent._send_collapsible_reasoning_panel(
reasoning_content=reason_text,
title=panel_title,
lark_client=lark_client,
reply_message_id=reply_message_id,
receive_id=receive_id,
receive_id_type=receive_id_type,
)
if not success:
buffered_components.append(
Plain(f"🤔 {panel_title}: {reason_text}"),
)
continue
buffered_components.append(comp)
await _flush_buffer()
```
Then `send_message_chain` is reduced back to a high-level coordinator:
```python
if other_components:
await LarkMessageEvent._send_other_components_with_reasoning(
other_components,
lark_client,
reply_message_id,
receive_id,
receive_id_type,
)
```
From there, you can optionally decide whether the early `has_reasoning_marker` fast-path still adds enough value to justify the second code path; if not, this helper can become the single source of reasoning behavior.
### 3. Encapsulate the `card_id` guard in streaming
The added `card_id` checks are correct but add more branching to the loop. A tiny helper keeps the loop readable:
```python
async def _safe_update_streaming_text(
card_id: str | None,
text: str | None,
sequence: int,
) -> bool:
if not card_id or not text:
return False
return await self._update_streaming_text(card_id, text, sequence)
```
Usage:
```python
snapshot = delta
if snapshot and snapshot != last_sent:
sequence += 1
ok = await self._safe_update_streaming_text(card_id, snapshot, sequence)
if ok:
last_sent = snapshot
```
Similarly for `_flush_and_close_card`:
```python
async def _flush_and_close_card() -> None:
if not card_id:
return
nonlocal sequence
if delta and delta != last_sent:
sequence += 1
await self._update_streaming_text(card_id, delta, sequence)
sequence += 1
await self._close_streaming_mode(card_id, sequence)
```
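To make the dedup-and-sequence behavior easy to verify in isolation, here is a hypothetical pure-Python model of the streaming loop's update logic (no Lark client involved; `plan_updates` is illustrative, not part of the PR):

```python
def plan_updates(snapshots):
    """Yield (sequence, text) pairs, skipping empty or unchanged snapshots."""
    last_sent = None
    sequence = 0
    for snap in snapshots:
        # Only emit an update when there is content and it actually changed,
        # bumping the sequence number monotonically for each real send.
        if snap and snap != last_sent:
            sequence += 1
            yield sequence, snap
            last_sent = snap


# Repeated and empty snapshots produce no update; sequence stays monotonic.
updates = list(plan_updates(["", "Hel", "Hel", "Hello", "Hello", ""]))
assert updates == [(1, "Hel"), (2, "Hello")]
```

Modeling it this way makes the invariant explicit: the Lark API only ever sees non-empty, changed snapshots with strictly increasing sequence numbers.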
This keeps functionality intact while making the core streaming loop and `send_message_chain` easier to read and maintain.
</issue_to_address>
```diff
  # inject reasoning content to chain
- reasoning_content = event.get_extra("_llm_reasoning_content")
- result.chain.insert(0, Plain(f"🤔 思考: {reasoning_content}\n"))
+ reasoning_content = str(event.get_extra("_llm_reasoning_content"))
+ if event.get_platform_name() == "lark":
```

**suggestion:** Guard against `None` or empty reasoning when building the Lark panel marker.

Casting `event.get_extra("_llm_reasoning_content")` with `str(...)` turns `None` into the literal "None" and still inserts a marker for empty/whitespace-only content. Since downstream logic ignores empty `content`, this leaves you with a no-op component. Consider normalizing first, e.g. `raw = event.get_extra(...); reasoning_content = (raw or "").strip()`, and only inserting the reasoning component when `reasoning_content` is non-empty to avoid spurious markers and "None" text.

```suggestion
# inject reasoning content to chain
reasoning_content = (event.get_extra("_llm_reasoning_content") or "").strip()
if reasoning_content and event.get_platform_name() == "lark":
```
Code Review
This pull request introduces support for collapsible reasoning panels for the Lark platform and improves message handling. The implementation correctly uses a Json message component to trigger the special rendering on Lark and includes robust logic in lark_event.py to handle various message combinations, including fallbacks. The addition of checks for card_id in the streaming logic is a good improvement for stability.
My review includes a couple of suggestions to improve maintainability by refactoring duplicated code and replacing magic strings with constants. Overall, this is a solid contribution that adds a nice platform-specific feature.
| "type": "lark_collapsible_panel_reasoning", | ||
| "title": "💭 Thinking", |
The values for type ("lark_collapsible_panel_reasoning") and the default title ("💭 Thinking") are hardcoded here and also in astrbot/core/platform/sources/lark/lark_event.py. Using these 'magic strings' across multiple files can lead to inconsistencies and makes the code harder to maintain.
It's recommended to define these as constants in a shared module and import them where needed. This ensures consistency and makes future updates easier.
For example, you could create a constants file or add to an existing one:
```python
# In a shared constants module
LARK_REASONING_PANEL_TYPE = "lark_collapsible_panel_reasoning"
LARK_REASONING_DEFAULT_TITLE = "💭 Thinking"
```

```python
def _build_reasoning_collapsible_panel(reasoning_content: str, title: str) -> dict:
    return {
        "schema": "2.0",
        "body": {
            "elements": [
                {
                    "tag": "collapsible_panel",
                    "expanded": False,
                    "background_color": "grey",
                    "padding": "8px 8px 8px 8px",
                    "margin": "4px 0px 4px 0px",
                    "border": {
                        "color": "grey",
                        "corner_radius": "6px",
                    },
                    "header": {
                        "title": {
                            "tag": "plain_text",
                            "content": title,
                        },
                        "background_color": "grey",
                    },
                    "elements": [
                        {"tag": "markdown", "content": reasoning_content},
                    ],
                }
            ]
        },
    }

@staticmethod
def _build_reasoning_card(message_chain: MessageChain) -> dict | None:
    elements: list[dict] = []
    for comp in message_chain.chain:
        if isinstance(comp, Json) and isinstance(comp.data, dict):
            if comp.data.get("type") != "lark_collapsible_panel_reasoning":
                continue
            reasoning_content = str(comp.data.get("content", "")).strip()
            if not reasoning_content:
                continue
            elements.append(
                {
                    "tag": "collapsible_panel",
                    "expanded": bool(comp.data.get("expanded", False)),
                    "background_color": "grey",
                    "padding": "8px 8px 8px 8px",
                    "margin": "4px 0px 4px 0px",
                    "border": {
                        "color": "grey",
                        "corner_radius": "6px",
                    },
                    "header": {
                        "title": {
                            "tag": "plain_text",
                            "content": str(
                                comp.data.get("title", "💭 Thinking"),
                            ),
                        },
                        "background_color": "grey",
                    },
                    "elements": [
                        {
                            "tag": "markdown",
                            "content": reasoning_content,
                        }
                    ],
                }
            )
```
There is significant code duplication between _build_reasoning_collapsible_panel and _build_reasoning_card when creating the collapsible_panel JSON structure. This makes the code harder to maintain, as any change to the panel structure needs to be applied in two places.
To improve this, you could extract the common logic for building a panel element into a new helper method. For example:
```python
@staticmethod
def _build_collapsible_panel_element(content: str, title: str, expanded: bool) -> dict:
    return {
        "tag": "collapsible_panel",
        "expanded": expanded,
        "background_color": "grey",
        "padding": "8px 8px 8px 8px",
        "margin": "4px 0px 4px 0px",
        "border": {
            "color": "grey",
            "corner_radius": "6px",
        },
        "header": {
            "title": {
                "tag": "plain_text",
                "content": title,
            },
            "background_color": "grey",
        },
        "elements": [
            {"tag": "markdown", "content": content},
        ],
    }
```

Then, both `_build_reasoning_collapsible_panel` and `_build_reasoning_card` can be simplified by calling this new method. This will make the code cleaner and easier to manage.
Modifications / 改动点
Screenshots or Test Results / 运行截图或测试结果
Checklist / 检查清单
😊 If there are new features added in the PR, I have discussed it with the authors through issues/emails, etc.
/ 如果 PR 中有新加入的功能,已经通过 Issue / 邮件等方式和作者讨论过。
👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.
/ 我的更改经过了良好的测试,并已在上方提供了“验证步骤”和“运行截图”。
🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in `requirements.txt` and `pyproject.toml`.
/ 我确保没有引入新依赖库，或者引入了新依赖库的同时将其添加到 `requirements.txt` 和 `pyproject.toml` 文件相应位置。
😮 My changes do not introduce malicious code.
/ 我的更改没有引入恶意代码。
Summary by Sourcery
Add Lark-specific support for rendering LLM reasoning in collapsible interactive panels and improve streaming card update handling.
New Features:
Bug Fixes:
Enhancements: