feat(lark): add collapsible reasoning panel support and enhance message handling#6831

Merged
Soulter merged 2 commits into master from feat/lark-collapsible-card-reasoning on Mar 23, 2026

Conversation

Member

@Soulter Soulter commented Mar 23, 2026

Modifications / 改动点

  • This is NOT a breaking change. / 这不是一个破坏性变更。

Screenshots or Test Results / 运行截图或测试结果


Checklist / 检查清单

  • 😊 If there are new features added in the PR, I have discussed it with the authors through issues/emails, etc.
    / 如果 PR 中有新加入的功能,已经通过 Issue / 邮件等方式和作者讨论过。

  • 👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.
    / 我的更改经过了良好的测试,并已在上方提供了“验证步骤”和“运行截图”

  • 🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.
    / 我确保没有引入新依赖库,或者引入了新依赖库的同时将其添加到 requirements.txt 和 pyproject.toml 文件相应位置。

  • 😮 My changes do not introduce malicious code.
    / 我的更改没有引入恶意代码。

Summary by Sourcery

Add Lark-specific support for rendering LLM reasoning in collapsible interactive panels and improve streaming card update handling.

New Features:

  • Render LLM reasoning content as Lark interactive collapsible panels instead of plain text for Lark platform messages.

Bug Fixes:

  • Guard streaming text update and final flush logic with card existence checks to avoid invalid update attempts.

Enhancements:

  • Preserve message component order while mixing collapsible reasoning panels with regular content in Lark message sending.
  • Avoid updating or closing streaming cards when no card_id is available to prevent erroneous operations.
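
The card-existence guard described in the last two points can be sketched as a small helper. This is a minimal illustration of the pattern only, not the actual `lark_event.py` API; `safe_update`, `update_fn`, and the card id value are hypothetical names.

```python
import asyncio

async def safe_update(card_id, text, sequence, update_fn):
    """Attempt a streaming-card update only when a card exists and there is text."""
    if not card_id or not text:
        return False  # no card yet, or nothing to send: skip the update entirely
    await update_fn(card_id, text, sequence)
    return True

async def demo():
    calls = []

    async def fake_update(card_id, text, seq):
        calls.append((card_id, text, seq))

    skipped = await safe_update(None, "hello", 1, fake_update)     # guarded: no card_id
    sent = await safe_update("card-123", "hello", 2, fake_update)  # normal update path
    return skipped, sent, calls

print(asyncio.run(demo()))  # → (False, True, [('card-123', 'hello', 2)])
```

Centralizing the check like this keeps the streaming loop free of repeated `if card_id` branching.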

@auto-assign auto-assign bot requested review from Fridemn and LIghtJUNction March 23, 2026 06:58
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant enhancement to how reasoning content from large language models is presented to users on the Lark platform. By integrating a new Json message component and dedicated handling logic, the system can now display complex reasoning in an interactive, collapsible panel, improving the clarity and organization of information. The changes also refine the overall message sending mechanism for Lark, ensuring that various message types are processed and displayed in the intended sequence.

Highlights

  • Lark Collapsible Reasoning Panel: Implemented support for displaying LLM reasoning content within a collapsible panel specifically for the Lark platform, enhancing readability and user experience.
  • Enhanced Message Handling: Refactored the message sending logic for Lark to properly handle and sequence different message components, including interactive cards and plain text, ensuring correct display order.
  • New JSON Message Component: Introduced a new Json message component to encapsulate structured data, enabling more flexible and platform-specific message formatting, such as the Lark collapsible panel.
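
The collapsible-panel card structure from this PR's diff can be reproduced as a standalone builder. This is a sketch mirroring the JSON shape shown in the review excerpts below; the function name `build_reasoning_panel_card` is illustrative, not the method name used in the codebase.

```python
import json

def build_reasoning_panel_card(reasoning_content: str, title: str = "💭 Thinking") -> dict:
    """Build a Lark card (schema 2.0) wrapping reasoning text in a collapsed panel."""
    return {
        "schema": "2.0",
        "body": {
            "elements": [
                {
                    "tag": "collapsible_panel",
                    # collapsed by default so reasoning stays out of the way
                    "expanded": False,
                    "background_color": "grey",
                    "padding": "8px 8px 8px 8px",
                    "margin": "4px 0px 4px 0px",
                    "border": {"color": "grey", "corner_radius": "6px"},
                    "header": {
                        "title": {"tag": "plain_text", "content": title},
                        "background_color": "grey",
                    },
                    "elements": [{"tag": "markdown", "content": reasoning_content}],
                }
            ]
        },
    }

card = build_reasoning_panel_card("Step 1: parse the request. Step 2: draft a reply.")
payload = json.dumps(card, ensure_ascii=False)  # card must serialize cleanly for the API
print(card["schema"], card["body"]["elements"][0]["tag"])  # → 2.0 collapsible_panel
```

The `Json` message component carries this dict through the chain, and the Lark adapter decides at send time whether to render it as an interactive card.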

Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 2 issues, and left some high level feedback:

  • The reasoning card construction logic is duplicated between _build_reasoning_collapsible_panel and _build_reasoning_card; consider extracting a shared helper for building a single collapsible panel (with shared colors, padding, defaults, etc.) to avoid divergence in future changes.
  • `_build_reasoning_card` currently aborts and returns `None` on the first non-`Json`/`Plain` component, which silently disables the card path even if other valid reasoning markers exist; consider either skipping unsupported components or making this behavior explicit (e.g., via logging) so it's easier to understand why a card wasn't generated.
  • There are several hard-coded magic values related to Lark UI (e.g., "grey" colors, padding/margin strings, "lark_collapsible_panel_reasoning" type, "💭 Thinking" title, and the platform name string "lark"); introducing constants or enums for these would make the behavior easier to tweak and reduce the risk of typos.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The reasoning card construction logic is duplicated between `_build_reasoning_collapsible_panel` and `_build_reasoning_card`; consider extracting a shared helper for building a single collapsible panel (with shared colors, padding, defaults, etc.) to avoid divergence in future changes.
- `_build_reasoning_card` currently aborts and returns `None` on the first non-`Json`/`Plain` component, which silently disables the card path even if other valid reasoning markers exist; consider either skipping unsupported components or making this behavior explicit (e.g., via logging) so it's easier to understand why a card wasn't generated.
- There are several hard-coded magic values related to Lark UI (e.g., `"grey"` colors, padding/margin strings, `"lark_collapsible_panel_reasoning"` type, `"💭 Thinking"` title, and the platform name string `"lark"`); introducing constants or enums for these would make the behavior easier to tweak and reduce the risk of typos.

## Individual Comments

### Comment 1
<location path="astrbot/core/pipeline/result_decorate/stage.py" line_range="277-279" />
<code_context>
                 # inject reasoning content to chain
-                reasoning_content = event.get_extra("_llm_reasoning_content")
-                result.chain.insert(0, Plain(f"🤔 思考: {reasoning_content}\n"))
+                reasoning_content = str(event.get_extra("_llm_reasoning_content"))
+                if event.get_platform_name() == "lark":
+                    result.chain.insert(
</code_context>
<issue_to_address>
**suggestion:** Guard against `None` or empty reasoning when building the Lark panel marker.

Casting `event.get_extra("_llm_reasoning_content")` with `str(...)` turns `None` into the literal "None" and still inserts a marker for empty/whitespace-only content. Since downstream logic ignores empty `content`, this leaves you with a no-op component. Consider normalizing first, e.g. `raw = event.get_extra(...); reasoning_content = (raw or "").strip()`, and only inserting the reasoning component when `reasoning_content` is non-empty to avoid spurious markers and "None" text.

```suggestion
                # inject reasoning content to chain
                reasoning_content = (event.get_extra("_llm_reasoning_content") or "").strip()
                if reasoning_content and event.get_platform_name() == "lark":
```
</issue_to_address>

### Comment 2
<location path="astrbot/core/platform/sources/lark/lark_event.py" line_range="283" />
<code_context>
             ret.append(_stage)
         return ret

+    @staticmethod
+    def _build_reasoning_collapsible_panel(reasoning_content: str, title: str) -> dict:
+        return {
</code_context>
<issue_to_address>
**issue (complexity):** Consider refactoring the new reasoning panel and streaming logic by centralizing panel/card construction and extracting reasoning-specific and card-id checks into small helpers to keep `send_message_chain` and the streaming loop simple and readable.

You can keep all current behavior but reduce the new complexity by centralizing the panel/card building and pulling the reasoning-handling flow out of `send_message_chain`.

### 1. Deduplicate panel JSON construction

Right now `_build_reasoning_collapsible_panel` and `_build_reasoning_card` hardcode very similar structures. You can centralize the panel element creation and reuse it both places:

```python
@staticmethod
def _build_reasoning_panel_element(
    content: str,
    title: str = "💭 Thinking",
    expanded: bool = False,
) -> dict:
    return {
        "tag": "collapsible_panel",
        "expanded": bool(expanded),
        "background_color": "grey",
        "padding": "8px 8px 8px 8px",
        "margin": "4px 0px 4px 0px",
        "border": {
            "color": "grey",
            "corner_radius": "6px",
        },
        "header": {
            "title": {
                "tag": "plain_text",
                "content": title,
            },
            "background_color": "grey",
        },
        "elements": [
            {"tag": "markdown", "content": content},
        ],
    }
```

Then:

```python
@staticmethod
def _build_reasoning_collapsible_panel(reasoning_content: str, title: str) -> dict:
    return {
        "schema": "2.0",
        "body": {
            "elements": [
                LarkMessageEvent._build_reasoning_panel_element(
                    content=reasoning_content,
                    title=title,
                    expanded=False,
                ),
            ],
        },
    }
```

And `_build_reasoning_card` becomes simpler and consistent:

```python
@staticmethod
def _build_reasoning_card(message_chain: MessageChain) -> dict | None:
    elements: list[dict] = []

    for comp in message_chain.chain:
        if isinstance(comp, Json) and isinstance(comp.data, dict):
            if comp.data.get("type") != "lark_collapsible_panel_reasoning":
                continue

            reasoning_content = str(comp.data.get("content", "")).strip()
            if not reasoning_content:
                continue

            elements.append(
                LarkMessageEvent._build_reasoning_panel_element(
                    content=reasoning_content,
                    title=str(comp.data.get("title", "💭 Thinking")),
                    expanded=bool(comp.data.get("expanded", False)),
                ),
            )
        elif isinstance(comp, Plain):
            if comp.text:
                elements.append({"tag": "markdown", "content": comp.text})
        else:
            return None

    return {"schema": "2.0", "body": {"elements": elements}} if elements else None
```

This removes the duplicated dict literals and keeps styling changes in one place.

### 2. Extract reasoning handling out of `send_message_chain`

The buffering + per-component reasoning handling makes `send_message_chain` harder to follow and duplicates marker handling semantics with the early-card branch. You can move “other components + reasoning markers” into a dedicated helper and call it from `send_message_chain`.

For example:

```python
@staticmethod
async def _send_other_components_with_reasoning(
    components: list,
    lark_client: lark.Client,
    reply_message_id: str | None,
    receive_id: str | None,
    receive_id_type: str | None,
) -> None:
    buffered_components: list = []

    async def _flush_buffer() -> None:
        nonlocal buffered_components
        if not buffered_components:
            return

        pending_chain = MessageChain()
        pending_chain.chain = buffered_components
        buffered_components = []

        res = await LarkMessageEvent._convert_to_lark(pending_chain, lark_client)
        if not res:
            return

        wrapped = {
            "zh_cn": {
                "title": "",
                "content": res,
            },
        }
        await LarkMessageEvent._send_im_message(
            lark_client,
            content=json.dumps(wrapped),
            msg_type="post",
            reply_message_id=reply_message_id,
            receive_id=receive_id,
            receive_id_type=receive_id_type,
        )

    for comp in components:
        if isinstance(comp, Json) and isinstance(comp.data, dict):
            if comp.data.get("type") == "lark_collapsible_panel_reasoning":
                await _flush_buffer()
                reason_text = str(comp.data.get("content", "")).strip()
                if reason_text:
                    panel_title = str(comp.data.get("title", "💭 Thinking"))
                    success = await LarkMessageEvent._send_collapsible_reasoning_panel(
                        reasoning_content=reason_text,
                        title=panel_title,
                        lark_client=lark_client,
                        reply_message_id=reply_message_id,
                        receive_id=receive_id,
                        receive_id_type=receive_id_type,
                    )
                    if not success:
                        buffered_components.append(
                            Plain(f"🤔 {panel_title}: {reason_text}"),
                        )
                continue
        buffered_components.append(comp)

    await _flush_buffer()
```

Then `send_message_chain` is reduced back to a high-level coordinator:

```python
if other_components:
    await LarkMessageEvent._send_other_components_with_reasoning(
        other_components,
        lark_client,
        reply_message_id,
        receive_id,
        receive_id_type,
    )
```

From there, you can optionally decide whether the early `has_reasoning_marker` fast-path still adds enough value to justify the second code path; if not, this helper can become the single source of reasoning behavior.

### 3. Encapsulate the `card_id` guard in streaming

The added `card_id` checks are correct but add more branching to the loop. A tiny helper keeps the loop readable:

```python
async def _safe_update_streaming_text(
    card_id: str | None,
    text: str | None,
    sequence: int,
) -> bool:
    if not card_id or not text:
        return False
    return await self._update_streaming_text(card_id, text, sequence)
```

Usage:

```python
snapshot = delta
if snapshot and snapshot != last_sent:
    sequence += 1
    ok = await self._safe_update_streaming_text(card_id, snapshot, sequence)
    if ok:
        last_sent = snapshot
```

Similarly for `_flush_and_close_card`:

```python
async def _flush_and_close_card() -> None:
    if not card_id:
        return
    nonlocal sequence
    if delta and delta != last_sent:
        sequence += 1
        await self._update_streaming_text(card_id, delta, sequence)
    sequence += 1
    await self._close_streaming_mode(card_id, sequence)
```

This keeps functionality intact while making the core streaming loop and `send_message_chain` easier to read and maintain.
</issue_to_address>


Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for collapsible reasoning panels for the Lark platform and improves message handling. The implementation correctly uses a Json message component to trigger the special rendering on Lark and includes robust logic in lark_event.py to handle various message combinations, including fallbacks. The addition of checks for card_id in the streaming logic is a good improvement for stability.

My review includes a couple of suggestions to improve maintainability by refactoring duplicated code and replacing magic strings with constants. Overall, this is a solid contribution that adds a nice platform-specific feature.

Comment on lines +284 to +285
```python
"type": "lark_collapsible_panel_reasoning",
"title": "💭 Thinking",
```

medium

The values for type ("lark_collapsible_panel_reasoning") and the default title ("💭 Thinking") are hardcoded here and also in astrbot/core/platform/sources/lark/lark_event.py. Using these 'magic strings' across multiple files can lead to inconsistencies and makes the code harder to maintain.

It's recommended to define these as constants in a shared module and import them where needed. This ensures consistency and makes future updates easier.

For example, you could create a constants file or add to an existing one:

```python
# In a shared constants module
LARK_REASONING_PANEL_TYPE = "lark_collapsible_panel_reasoning"
LARK_REASONING_DEFAULT_TITLE = "💭 Thinking"
```

Comment on lines +284 to +351
```python
def _build_reasoning_collapsible_panel(reasoning_content: str, title: str) -> dict:
    return {
        "schema": "2.0",
        "body": {
            "elements": [
                {
                    "tag": "collapsible_panel",
                    "expanded": False,
                    "background_color": "grey",
                    "padding": "8px 8px 8px 8px",
                    "margin": "4px 0px 4px 0px",
                    "border": {
                        "color": "grey",
                        "corner_radius": "6px",
                    },
                    "header": {
                        "title": {
                            "tag": "plain_text",
                            "content": title,
                        },
                        "background_color": "grey",
                    },
                    "elements": [
                        {"tag": "markdown", "content": reasoning_content},
                    ],
                }
            ]
        },
    }

@staticmethod
def _build_reasoning_card(message_chain: MessageChain) -> dict | None:
    elements: list[dict] = []
    for comp in message_chain.chain:
        if isinstance(comp, Json) and isinstance(comp.data, dict):
            if comp.data.get("type") != "lark_collapsible_panel_reasoning":
                continue
            reasoning_content = str(comp.data.get("content", "")).strip()
            if not reasoning_content:
                continue
            elements.append(
                {
                    "tag": "collapsible_panel",
                    "expanded": bool(comp.data.get("expanded", False)),
                    "background_color": "grey",
                    "padding": "8px 8px 8px 8px",
                    "margin": "4px 0px 4px 0px",
                    "border": {
                        "color": "grey",
                        "corner_radius": "6px",
                    },
                    "header": {
                        "title": {
                            "tag": "plain_text",
                            "content": str(
                                comp.data.get("title", "💭 Thinking"),
                            ),
                        },
                        "background_color": "grey",
                    },
                    "elements": [
                        {
                            "tag": "markdown",
                            "content": reasoning_content,
                        }
                    ],
                }
            )
```

medium

There is significant code duplication between _build_reasoning_collapsible_panel and _build_reasoning_card when creating the collapsible_panel JSON structure. This makes the code harder to maintain, as any change to the panel structure needs to be applied in two places.

To improve this, you could extract the common logic for building a panel element into a new helper method. For example:

```python
@staticmethod
def _build_collapsible_panel_element(content: str, title: str, expanded: bool) -> dict:
    return {
        "tag": "collapsible_panel",
        "expanded": expanded,
        "background_color": "grey",
        "padding": "8px 8px 8px 8px",
        "margin": "4px 0px 4px 0px",
        "border": {
            "color": "grey",
            "corner_radius": "6px",
        },
        "header": {
            "title": {
                "tag": "plain_text",
                "content": title,
            },
            "background_color": "grey",
        },
        "elements": [
            {"tag": "markdown", "content": content},
        ],
    }
```

Then, both _build_reasoning_collapsible_panel and _build_reasoning_card can be simplified by calling this new method. This will make the code cleaner and easier to manage.

@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Mar 23, 2026
@Soulter Soulter merged commit 3c6cd22 into master Mar 23, 2026
5 of 6 checks passed
@Soulter Soulter deleted the feat/lark-collapsible-card-reasoning branch March 23, 2026 08:12

Labels

size:L This PR changes 100-499 lines, ignoring generated files.
