
Conversation

@haljet-chain (Collaborator) commented Nov 23, 2025

This PR introduces new tests for the NLG generator functions to ensure their reliability and correctness.

Changes

  • Added dedicated test files within backend/app/services/nlg/tests/ for each generator function.
  • Implemented tests using mocked LLM responses to simulate various scenarios (a rough sketch follows after this list).
  • Validated output formatting, proper handling of missing fields, and accurate template filling.
  • Included checks for prompt correctness to ensure stable and predictable LLM interactions.
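
For illustration, a rough sketch of what one of these tests might look like, assuming a pytest-asyncio setup. ConcreteNLGEngine is the test-only engine subclass mentioned in the review below (its import is omitted here), and the method name, template key, and constructor wiring are illustrative assumptions rather than the repository's exact code:

import json
from unittest.mock import AsyncMock, MagicMock

import pytest

from app.services.nlg.prompt_templates import fill_template, get_template  # assumed import path


@pytest.mark.asyncio  # assumes pytest-asyncio is configured
async def test_tokenomics_text_success():
    # The LLM is never reached over HTTP; generate_text is an AsyncMock.
    mock_llm_client = MagicMock()
    mock_llm_client.generate_text = AsyncMock(return_value="Supply is moderately concentrated.")

    raw_data = {"total_supply": 1_000_000, "circulating_supply": 400_000}
    engine = ConcreteNLGEngine(llm_client=mock_llm_client)  # hypothetical constructor wiring

    result = await engine.generate_tokenomics_text(raw_data)  # hypothetical method name

    assert result == "Supply is moderately concentrated."
    # Prompt correctness: rebuild the expected prompt with the same template utilities.
    expected_prompt = fill_template(
        get_template("tokenomics"),  # assumed template key
        data=json.dumps(raw_data, indent=2),
    )
    mock_llm_client.generate_text.assert_called_with(expected_prompt)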

Summary by CodeRabbit

  • Tests
    • Improved test infrastructure for NLG components with a simplified mocking approach and expanded coverage for tokenomics, onchain, sentiment, and report generation functionality.
    • Added comprehensive new test suite for report generation with validation of error handling and edge cases.


@coderabbitai bot commented Nov 23, 2025

Walkthrough

The pull request refactors tests in the NLG engine module by replacing HTTP mocking via respx with simpler LLMClient AsyncMock-based testing, and adds a new test module for ReportNLGEngine. The refactored tests use mocked LLMClient.generate_text to verify tokenomics, onchain, sentiment, and full report generation behaviors. A new test module provides comprehensive coverage for ReportNLGEngine code audit and team documentation generation.

Changes

Cohort / File(s) Summary
NLG Engine Tests Refactoring
backend/app/services/nlg/tests/test_nlg_engine.py
Replaced respx HTTP mocking with mocked LLMClient and AsyncMock. Updated tests for tokenomics, onchain, and sentiment behaviors to verify success, error handling, missing data, and empty content scenarios. Added pytest fixture mock_llm_client and updated nlg_engine fixture. Tests now assert on JSON outputs and prompt construction via template utilities instead of HTTP response interception.
New ReportNLGEngine Tests
backend/app/services/nlg/tests/test_report_nlg_engine.py
New test module for ReportNLGEngine covering code audit and team documentation generation. Tests verify successful generation, handling of missing input data, empty LLM responses, and LLM exceptions. Prompts are validated against template utilities, and LLMClient interactions are asserted.
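
To make the new error-path coverage concrete, here is a hedged sketch of what an exception-handling test for ReportNLGEngine could look like; the constructor argument, the input shape, and the fallback wording asserted at the end are assumptions, not the module's exact code:

from unittest.mock import AsyncMock, MagicMock

import pytest

from app.services.nlg.report_nlg_engine import ReportNLGEngine  # assumed import path


@pytest.mark.asyncio
async def test_team_documentation_text_llm_exception():
    # Simulate the LLM backend failing outright.
    mock_llm_client = MagicMock()
    mock_llm_client.generate_text = AsyncMock(side_effect=RuntimeError("LLM unavailable"))

    engine = ReportNLGEngine(llm_client=mock_llm_client)  # assumed constructor argument
    result = await engine.generate_team_documentation_text({"team": []})  # assumed input shape

    # Assumed behaviour: the engine returns a readable fallback message instead of
    # letting the exception propagate.
    assert "error" in result.lower()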

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Consistent refactoring pattern across multiple test methods (respx → AsyncMock mocking), which simplifies review but requires verification across several test cases
  • New test module with comprehensive scenario coverage (8+ test cases), each of which needs validation
  • Mock fixture setup and assertion patterns should be verified for correctness
  • Verify that test coverage and assertions align with the refactored approach

Suggested reviewers

  • felixjordandev

Poem

🐰 The mocks have changed, respx is gone,
AsyncMock now carries on,
Templates checked, assertions made,
New tests built, a grand cascade!
Reports audited, bugs now fade. ✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The title accurately describes the main change (adding comprehensive tests for NLG generator functions). It is specific, clear, and directly reflects the primary objective of the PR.

@coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
backend/app/services/nlg/tests/test_nlg_engine.py (1)

103-114: Reduce code duplication in prompt validation.

The expected prompt construction manually rebuilds the dictionary instead of reusing the raw_data variable. This creates maintenance overhead if the test data changes.

Apply this diff for consistency with other tests (e.g., lines 46-50):

     # Validate prompt correctness
     expected_template = get_template("onchain_metrics")
     expected_prompt = fill_template(
         expected_template,
-        data=json.dumps({
-            "active_addresses": 1000,
-            "holders": 500,
-            "transaction_flows": "high",
-            "liquidity": "good",
-        }, indent=2)
+        data=json.dumps(raw_data, indent=2)
     )
     mock_llm_client.generate_text.assert_called_with(expected_prompt)
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 110a53e and 77a1cea.

⛔ Files ignored due to path filters (2)
  • backend/app/services/nlg/tests/__pycache__/test_nlg_engine.cpython-313-pytest-8.4.2.pyc is excluded by !**/*.pyc
  • backend/app/services/nlg/tests/__pycache__/test_report_nlg_engine.cpython-313-pytest-8.4.2.pyc is excluded by !**/*.pyc
📒 Files selected for processing (2)
  • backend/app/services/nlg/tests/test_nlg_engine.py (1 hunks)
  • backend/app/services/nlg/tests/test_report_nlg_engine.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
backend/app/services/nlg/tests/test_report_nlg_engine.py (3)
backend/app/services/nlg/report_nlg_engine.py (2)
  • ReportNLGEngine (12-147)
  • generate_team_documentation_text (113-147)
backend/app/services/nlg/llm_client.py (2)
  • LLMClient (9-55)
  • generate_text (30-55)
backend/app/services/nlg/prompt_templates.py (2)
  • get_template (6-139)
  • fill_template (141-146)
backend/app/services/nlg/tests/test_nlg_engine.py (2)
backend/app/services/nlg/llm_client.py (2)
  • LLMClient (9-55)
  • generate_text (30-55)
backend/app/services/nlg/prompt_templates.py (2)
  • get_template (6-139)
  • fill_template (141-146)
🪛 Ruff (0.14.5)
backend/app/services/nlg/tests/test_nlg_engine.py

10-10: Unused method argument: raw_data

(ARG002)

🔇 Additional comments (3)
backend/app/services/nlg/tests/test_report_nlg_engine.py (1)

1-143: Excellent test coverage for ReportNLGEngine!

This test module provides comprehensive coverage for the ReportNLGEngine's code audit and team documentation generation methods. The tests properly:

  • Mock the LLMClient to isolate unit behavior
  • Validate prompt correctness using get_template and fill_template
  • Cover success paths, missing data, empty LLM responses, and exception handling
  • Follow consistent patterns across all test cases

The approach of validating prompts (lines 40-46, 102-108) is particularly valuable for ensuring stable LLM interactions.

backend/app/services/nlg/tests/test_nlg_engine.py (2)

1-26: Well-designed test refactoring with proper mocking!

The refactoring from HTTP-based mocking to LLMClient AsyncMock significantly improves test isolation and clarity. The ConcreteNLGEngine provides a clean testing implementation, and the mock_llm_client fixture properly sets up the async context manager.

Note: The static analysis hint about unused raw_data parameter on line 10 is a false positive—the parameter is required to match the abstract method signature from the base class.
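
A minimal sketch of the fixture shape described here, assuming the engine receives the client through its constructor; the stubbed return value is purely illustrative:

from unittest.mock import AsyncMock, MagicMock

import pytest


@pytest.fixture
def mock_llm_client():
    # LLMClient is replaced wholesale; generate_text becomes an AsyncMock so each
    # test can set return_value or side_effect for its scenario.
    client = MagicMock()
    client.generate_text = AsyncMock(return_value="stubbed LLM output")
    # MagicMock wires __aenter__/__aexit__ as AsyncMocks on Python 3.8+, so the
    # client can also be used as an async context manager if the engine needs it.
    client.__aenter__.return_value = client
    return client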


29-217: Comprehensive test coverage for NLG engine methods!

The test suite thoroughly validates tokenomics, onchain, and sentiment generation across multiple scenarios:

  • Success paths with prompt validation
  • Missing/empty data handling
  • Empty LLM responses triggering error messages
  • Exception handling for LLM failures

The consistent test structure and explicit prompt validation (using get_template and fill_template) ensure reliable and maintainable tests.
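
For instance, the missing-data path could be exercised with something as small as the following, leaning on the nlg_engine fixture described above; the method name and the asserted fallback are assumptions, not the suite's actual assertions:

import pytest


@pytest.mark.asyncio
async def test_onchain_text_missing_data(nlg_engine):
    # Empty input: the engine should still return a readable fallback rather than raise
    # (assumed behaviour; the real tests assert whatever message the engine produces).
    result = await nlg_engine.generate_onchain_text({})  # hypothetical method name
    assert isinstance(result, str) and result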

@felixjordandev (Collaborator) commented:

Nice, the mocked LLM responses in these tests look solid; this should catch edge cases well.

@felixjordandev felixjordandev merged commit ffb0a2e into main Nov 23, 2025
1 check passed
@felixjordandev felixjordandev deleted the feat/add-nlg-generator-tests branch November 23, 2025 16:00