Conversation

@klingonaston
Collaborator

@klingonaston klingonaston commented Nov 20, 2025

This PR introduces a new module for managing and dynamically filling prompt templates used in NLG report generation.

Changes

  • Created backend/app/services/nlg/prompt_templates.py to house all prompt templates.
  • Implemented templates for various report sections: tokenomics, onchain metrics, sentiment, team analysis, documentation, code audit, and risk factors.
  • Templates are stored as Python dictionaries, keyed by section ID, and include {data} placeholders.
  • Added utility functions to prompt_templates.py for dynamically filling these templates with provided data.
  • This change centralizes prompt management and improves the flexibility of report generation.
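
As a hedged sketch, usage of the new module is expected to look roughly like this (the names get_template and fill_template and the {data} placeholder are taken from the review summary in this PR; the exact code in prompt_templates.py may differ):

```python
# Illustrative sketch only: mirrors the described API, not the actual file.

TEMPLATES = {
    "tokenomics": (
        "Analyze the following tokenomics data and provide a summary.\n\n"
        "Tokenomics Data:\n{data}"
    ),
}

def get_template(section_id: str) -> str:
    """Return the prompt template for a section, or a default message."""
    return TEMPLATES.get(section_id, "No template found for this section ID.")

def fill_template(template: str, **kwargs) -> str:
    """Fill {placeholder} slots in a template with keyword arguments."""
    return template.format(**kwargs)

prompt = fill_template(get_template("tokenomics"), data="total supply: 1B, vesting: 4y")
```

Keeping templates as plain dict entries with str.format placeholders makes each section prompt easy to review and swap without touching the report-generation code.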

Summary by CodeRabbit

  • New Features

    • Added templated report generation for multiple analysis sections (tokenomics, on‑chain metrics, sentiment, team, documentation, code audit, risk).
  • Bug Fixes / Reliability

    • Standardized agent timeout and error handling with consistent result/status shapes for clearer failure reporting.
  • Tests

    • Updated test suites to reflect new data shapes and agent behaviors.

@coderabbitai
coderabbitai bot commented Nov 20, 2025

Walkthrough

Adds a new NLG prompt templates module and utilities, standardizes per-agent status/timeout/error shaping in the orchestrator, updates agent function signatures and tests to new return shapes and parameters, and adjusts integration test mocks to patch orchestrator-level fetch functions.

Changes

NLG Prompt Templates (backend/app/services/nlg/prompt_templates.py)
  New module providing get_template(section_id: str) to retrieve section prompt templates (tokenomics, onchain_metrics, sentiment, team_analysis, documentation, code_audit, risk_factors) and fill_template(template: str, **kwargs) to populate templates. Unknown IDs return a default message.
Orchestrator: status & timeout handling (backend/app/core/orchestrator.py)
  Adds per-agent timeout handling, explicit timeout/error result paths, logs agent lifecycle events, and standardizes returned result shapes to include status (e.g., {"status": "completed", "data": ...} or {"status": "failed", "error": ...}).
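
The status/timeout shaping described above can be sketched with asyncio (run_agent and AGENT_TIMEOUT are illustrative names for this sketch, not the actual orchestrator internals):

```python
import asyncio

AGENT_TIMEOUT = 30.0  # illustrative default, in seconds

async def run_agent(name, fetch_fn, *args, timeout=AGENT_TIMEOUT):
    """Run one agent coroutine and normalize its outcome to a status dict."""
    try:
        result = await asyncio.wait_for(fetch_fn(*args), timeout=timeout)
    except asyncio.TimeoutError:
        return {"status": "failed", "error": "timeout"}
    except Exception as exc:  # noqa: BLE001 - agents may raise anything
        return {"status": "failed", "error": str(exc)}
    # Results that already carry a status pass through; raw values get wrapped.
    if isinstance(result, dict) and "status" in result:
        return result
    return {"status": "completed", "data": result}
```

Wrapping every outcome into a uniform {"status": ..., ...} dict lets the aggregation step treat completed, failed, and timed-out agents identically.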
Agent interface & tests updates (backend/app/services/agents/tests/test_code_audit_agent.py, backend/app/services/agents/tests/test_onchain_agent.py)
  Tests updated to reflect agent API changes: audit_codebase(...) is renamed to fetch_data(...) and now returns dicts (code audit), and the onchain functions accept an additional token_id parameter; assertions were adapted to the dict shapes and new parameters.
Integration test mocks updated (backend/tests/test_orchestrator_integration.py)
  Tests now patch the orchestrator-level fetch functions (backend.app.core.orchestrator.fetch_onchain_metrics, ...fetch_tokenomics) instead of the agent-module targets; mock types were adjusted where needed.
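
The reasoning behind this: unittest.mock.patch replaces a name in the namespace where it is looked up, not where it is defined. A self-contained toy (the module names agent_mod and orch_mod are invented for this sketch) shows why patching the agent module alone would not affect the orchestrator:

```python
import sys
import types
from unittest import mock

# Toy "agent" module exposing a fetch function.
agent = types.ModuleType("agent_mod")
agent.fetch_tokenomics = lambda token_id: "real"
sys.modules["agent_mod"] = agent

# Toy "orchestrator" that bound the function at import time,
# as in `from agents import fetch_tokenomics`.
orch = types.ModuleType("orch_mod")
orch.fetch_tokenomics = agent.fetch_tokenomics
sys.modules["orch_mod"] = orch

def run_report(token_id):
    # The orchestrator resolves the name in its *own* module namespace.
    return sys.modules["orch_mod"].fetch_tokenomics(token_id)

# Patching the defining module does not touch the orchestrator's reference...
with mock.patch("agent_mod.fetch_tokenomics", return_value="mocked"):
    assert run_report("t1") == "real"

# ...patching the orchestrator-level name does, which is what the tests now do.
with mock.patch("orch_mod.fetch_tokenomics", return_value="mocked"):
    assert run_report("t1") == "mocked"
```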

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Orchestrator
  participant Agent
  Note over Orchestrator: For each registered agent
  Orchestrator->>Agent: invoke fetch_* (with timeout)
  alt Agent completes within timeout
    Agent-->>Orchestrator: return result (dict or value)
    Orchestrator->>Orchestrator: if result has "status" use as-is, else wrap -> {"status":"completed","data":result}
  else Agent times out
    Agent--xOrchestrator: no response
    Orchestrator-->>Orchestrator: create {"status":"failed","error":"timeout"}
  end
  alt Agent raises exception
    Agent--xOrchestrator: exception
    Orchestrator-->>Orchestrator: create {"status":"failed","error":exception_message}
  end
  Orchestrator-->>Orchestrator: aggregate per-agent statuses for final result
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Pay attention to orchestrator changes around timeout implementation and aggregation of status shapes.
  • Verify consistency of standardized result shapes across all agent callers and tests.
  • Check agent signature updates (token_id) and downstream call sites.
  • Validate new template placeholders and fill behavior for edge cases (missing kwargs).
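
On the last point, the edge case is that str.format raises KeyError when a placeholder has no matching kwarg. A defensive fill, sketched here as safe_fill_template (illustrative; the PR's actual fill_template may behave differently), would report the missing value instead of raising:

```python
def safe_fill_template(template: str, **kwargs) -> str:
    """Fill {placeholder} slots, reporting missing values instead of raising."""
    try:
        return template.format(**kwargs)
    except KeyError as exc:
        return f"Template error: missing value for placeholder {exc}"

template = "Tokenomics Data:\n{data}"
safe_fill_template(template)                     # -> "Template error: missing value for placeholder 'data'"
safe_fill_template(template, data="supply: 1B")  # -> "Tokenomics Data:\nsupply: 1B"
```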

Suggested reviewers

  • felixjordandev

Poem

🐰
A rabbit types a tidy line,
Templates neat, and statuses fine,
Timeouts stamped, results now told,
Tests adjusted, mocks controlled—
Hop, deploy; the prompts unfold.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

Docstring Coverage: ⚠️ Warning. Docstring coverage is 28.57%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (2 passed)

Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
Title Check: ✅ Passed. The PR title specifically references 'NLG prompt templates for report generation,' which directly aligns with the primary change: adding the new prompt_templates.py module. However, the changeset includes substantial modifications beyond the NLG module, including orchestrator refactoring, agent timeout handling, status-aware result wrapping, and multiple test updates. The title captures only one aspect of the work.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 11c4828 and 4c17f72.

⛔ Files ignored due to path filters (14)
  • backend/app/api/v1/__pycache__/routes.cpython-313.pyc is excluded by !**/*.pyc
  • backend/app/core/__pycache__/orchestrator.cpython-313.pyc is excluded by !**/*.pyc
  • backend/app/services/__pycache__/report_processor.cpython-313.pyc is excluded by !**/*.pyc
  • backend/app/services/agents/__pycache__/code_audit_agent.cpython-313.pyc is excluded by !**/*.pyc
  • backend/app/services/agents/__pycache__/onchain_agent.cpython-313.pyc is excluded by !**/*.pyc
  • backend/app/services/agents/__pycache__/social_sentiment_agent.cpython-313.pyc is excluded by !**/*.pyc
  • backend/app/services/agents/tests/__pycache__/test_code_audit_agent.cpython-313-pytest-8.4.2.pyc is excluded by !**/*.pyc
  • backend/app/services/agents/tests/__pycache__/test_onchain_agent.cpython-313-pytest-8.4.2.pyc is excluded by !**/*.pyc
  • backend/app/services/agents/tests/__pycache__/test_social_sentiment_agent.cpython-313-pytest-8.4.2.pyc is excluded by !**/*.pyc
  • backend/app/services/agents/tests/__pycache__/test_team_doc_agent.cpython-313-pytest-8.4.2.pyc is excluded by !**/*.pyc
  • backend/app/services/nlg/__pycache__/llm_client.cpython-313.pyc is excluded by !**/*.pyc
  • backend/app/services/nlg/tests/__pycache__/test_llm_client.cpython-313-pytest-8.4.2.pyc is excluded by !**/*.pyc
  • backend/logs/app.log is excluded by !**/*.log
  • backend/tests/__pycache__/test_orchestrator_integration.cpython-313-pytest-8.4.2.pyc is excluded by !**/*.pyc
📒 Files selected for processing (5)
  • backend/app/core/orchestrator.py (5 hunks)
  • backend/app/services/agents/tests/test_code_audit_agent.py (1 hunks)
  • backend/app/services/agents/tests/test_onchain_agent.py (8 hunks)
  • backend/app/services/nlg/prompt_templates.py (1 hunks)
  • backend/tests/test_orchestrator_integration.py (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • backend/app/services/nlg/prompt_templates.py
🧰 Additional context used
🧬 Code graph analysis (2)
backend/app/services/agents/tests/test_onchain_agent.py (1)
backend/app/services/agents/onchain_agent.py (1)
  • fetch_tokenomics (86-129)
backend/app/services/agents/tests/test_code_audit_agent.py (1)
backend/app/services/agents/code_audit_agent.py (1)
  • fetch_data (298-332)
🪛 Ruff (0.14.5)

backend/app/core/orchestrator.py
  • Lines 206, 248, 296: Do not catch blind exception: Exception (BLE001)

backend/app/services/agents/tests/test_onchain_agent.py
  • Lines 144, 164, 185, 237, 255, 300, 349, 384: Possible hardcoded password assigned to argument: "token_id" (S106)

🔇 Additional comments (3)
backend/app/services/agents/tests/test_code_audit_agent.py (1)

111-118: Adaptation to dict-based fetch_data result looks correct

The test now aligns with fetch_data returning CodeAuditResult.model_dump(): it checks the top-level keys, nested repo_url, and audit summary contents appropriately. No issues spotted.

backend/app/services/agents/tests/test_onchain_agent.py (1)

144-186: Tests correctly updated to pass token_id into fetch_tokenomics

All updated call sites now supply a token_id, which matches the new public API and ensures logs/traceability behave as expected. The static-analysis S106 warnings about “hardcoded password” on these literals are safe to ignore in this test context since these are non-sensitive dummy values.

Also applies to: 237-255, 300-301, 349-350, 384-385

backend/tests/test_orchestrator_integration.py (1)

37-45: Patching orchestrator-level fetch functions is appropriate

Switching the patches to backend.app.core.orchestrator.fetch_onchain_metrics / fetch_tokenomics correctly targets the symbols actually used inside onchain_data_agent closures, keeping these as true integration-style tests of the orchestrator wiring.

Also applies to: 91-99, 129-137



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
backend/app/services/nlg/prompt_templates.py (1)

10-70: Move templates dictionary to module-level constant for better performance.

The templates dictionary is recreated on every call to get_template(), which is inefficient. Since the templates are static, consider moving this to a module-level constant.

Apply this refactor:

+# Prompt templates for various report sections
+PROMPT_TEMPLATES = {
+    "tokenomics": """
+    Analyze the following tokenomics data and provide a comprehensive summary,
+    highlighting key aspects such as token distribution, vesting schedules,
+    inflation/deflation mechanisms, and any potential risks or advantages.
+    Focus on how these factors impact the long-term value and stability of the token.
+
+    Tokenomics Data:
+    {data}
+    """,
+    "onchain_metrics": """
+    Examine the provided on-chain metrics and generate an insightful analysis.
+    Cover aspects like active addresses, transaction volume, whale activity,
+    and network growth. Explain the implications of these metrics for the
+    project's health and adoption.
+
+    On-chain Metrics Data:
+    {data}
+    """,
+    "sentiment": """
+    Review the social sentiment data and summarize the overall market perception
+    of the project. Identify key themes, positive or negative trends, and
+    any significant events influencing sentiment. Discuss the potential impact
+    of this sentiment on the project's future.
+
+    Sentiment Data:
+    {data}
+    """,
+    "team_analysis": """
+    Analyze the team's background, experience, and contributions based on the
+    provided data. Assess the team's capability to execute the project roadmap
+    and highlight any strengths or weaknesses.
+
+    Team Analysis Data:
+    {data}
+    """,
+    "documentation": """
+    Evaluate the quality and completeness of the project's documentation.
+    Identify areas of excellence and areas needing improvement. Discuss how
+    effective documentation contributes to user adoption and developer engagement.
+
+    Documentation Data:
+    {data}
+    """,
+    "code_audit": """
+    Summarize the findings from the code audit report. Highlight critical
+    vulnerabilities, security best practices followed, and overall code quality.
+    Explain the implications of these findings for the project's security and reliability.
+
+    Code Audit Data:
+    {data}
+    """,
+    "risk_factors": """
+    Based on the provided data, identify and elaborate on the key risk factors
+    associated with the project. Categorize risks (e.g., technical, market, regulatory)
+    and discuss their potential impact and mitigation strategies.
+
+    Risk Factors Data:
+    {data}
+    """
+}
+
 def get_template(section_id: str) -> str:
     """
     Retrieves a prompt template based on the section ID.
     """
-    templates = {
-        "tokenomics": """
-        Analyze the following tokenomics data and provide a comprehensive summary,
-        highlighting key aspects such as token distribution, vesting schedules,
-        inflation/deflation mechanisms, and any potential risks or advantages.
-        Focus on how these factors impact the long-term value and stability of the token.
-
-        Tokenomics Data:
-        {data}
-        """,
-        "onchain_metrics": """
-        Examine the provided on-chain metrics and generate an insightful analysis.
-        Cover aspects like active addresses, transaction volume, whale activity,
-        and network growth. Explain the implications of these metrics for the
-        project's health and adoption.
-
-        On-chain Metrics Data:
-        {data}
-        """,
-        "sentiment": """
-        Review the social sentiment data and summarize the overall market perception
-        of the project. Identify key themes, positive or negative trends, and
-        any significant events influencing sentiment. Discuss the potential impact
-        of this sentiment on the project's future.
-
-        Sentiment Data:
-        {data}
-        """,
-        "team_analysis": """
-        Analyze the team's background, experience, and contributions based on the
-        provided data. Assess the team's capability to execute the project roadmap
-        and highlight any strengths or weaknesses.
-
-        Team Analysis Data:
-        {data}
-        """,
-        "documentation": """
-        Evaluate the quality and completeness of the project's documentation.
-        Identify areas of excellence and areas needing improvement. Discuss how
-        effective documentation contributes to user adoption and developer engagement.
-
-        Documentation Data:
-        {data}
-        """,
-        "code_audit": """
-        Summarize the findings from the code audit report. Highlight critical
-        vulnerabilities, security best practices followed, and overall code quality.
-        Explain the implications of these findings for the project's security and reliability.
-
-        Code Audit Data:
-        {data}
-        """,
-        "risk_factors": """
-        Based on the provided data, identify and elaborate on the key risk factors
-        associated with the project. Categorize risks (e.g., technical, market, regulatory)
-        and discuss their potential impact and mitigation strategies.
-
-        Risk Factors Data:
-        {data}
-        """
-    }
-    return templates.get(section_id, "No template found for this section ID.")
+    return PROMPT_TEMPLATES.get(section_id, "No template found for this section ID.")
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 001c974 and 11c4828.

📒 Files selected for processing (1)
  • backend/app/services/nlg/prompt_templates.py (1 hunks)

@felixjordandev felixjordandev merged commit d83d40a into main Nov 20, 2025
1 check passed
@felixjordandev felixjordandev deleted the feat/nlg-prompt-templates branch November 20, 2025 11:54