feat: Add comprehensive tests for NLG generator functions #62
Conversation
Walkthrough

The pull request refactors the tests in the NLG engine module, replacing HTTP mocking via respx with simpler AsyncMock-based mocking of LLMClient, and adds a new test module for ReportNLGEngine. The refactored tests use a mocked LLMClient.generate_text to verify tokenomics, onchain, sentiment, and full-report generation behaviors. The new test module provides comprehensive coverage of ReportNLGEngine's code audit and team documentation generation.
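For illustration, here is a minimal sketch of that AsyncMock style. The fixture name `mock_llm_client` appears in the tests themselves; the stub return value and the context-manager wiring are assumptions:

```python
from unittest.mock import AsyncMock

import pytest


@pytest.fixture
def mock_llm_client():
    # AsyncMock replaces the respx HTTP layer entirely: generate_text is
    # awaitable, so engine code can `await client.generate_text(prompt)`
    # without any network stubbing.
    client = AsyncMock()
    client.generate_text.return_value = "mocked LLM output"
    # Support `async with` usage by returning the mock from __aenter__.
    client.__aenter__.return_value = client
    return client
```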
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks: ❌ Failed checks (1 warning) | ✅ Passed checks (2 passed)
Actionable comments posted: 0
🧹 Nitpick comments (1)
backend/app/services/nlg/tests/test_nlg_engine.py (1)
103-114: Reduce code duplication in prompt validation.

The expected prompt construction manually rebuilds the dictionary instead of reusing the `raw_data` variable. This creates maintenance overhead if the test data changes. Apply this diff for consistency with other tests (e.g., lines 46-50):
```diff
  # Validate prompt correctness
  expected_template = get_template("onchain_metrics")
  expected_prompt = fill_template(
      expected_template,
-     data=json.dumps({
-         "active_addresses": 1000,
-         "holders": 500,
-         "transaction_flows": "high",
-         "liquidity": "good",
-     }, indent=2)
+     data=json.dumps(raw_data, indent=2)
  )
  mock_llm_client.generate_text.assert_called_with(expected_prompt)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `backend/app/services/nlg/tests/__pycache__/test_nlg_engine.cpython-313-pytest-8.4.2.pyc` is excluded by `!**/*.pyc`
- `backend/app/services/nlg/tests/__pycache__/test_report_nlg_engine.cpython-313-pytest-8.4.2.pyc` is excluded by `!**/*.pyc`
📒 Files selected for processing (2)
- `backend/app/services/nlg/tests/test_nlg_engine.py` (1 hunks)
- `backend/app/services/nlg/tests/test_report_nlg_engine.py` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
backend/app/services/nlg/tests/test_report_nlg_engine.py (3)
- backend/app/services/nlg/report_nlg_engine.py (2): `ReportNLGEngine` (12-147), `generate_team_documentation_text` (113-147)
- backend/app/services/nlg/llm_client.py (2): `LLMClient` (9-55), `generate_text` (30-55)
- backend/app/services/nlg/prompt_templates.py (2): `get_template` (6-139), `fill_template` (141-146)

backend/app/services/nlg/tests/test_nlg_engine.py (2)
- backend/app/services/nlg/llm_client.py (2): `LLMClient` (9-55), `generate_text` (30-55)
- backend/app/services/nlg/prompt_templates.py (2): `get_template` (6-139), `fill_template` (141-146)
🪛 Ruff (0.14.5)
backend/app/services/nlg/tests/test_nlg_engine.py
10-10: Unused method argument: raw_data
(ARG002)
🔇 Additional comments (3)
backend/app/services/nlg/tests/test_report_nlg_engine.py (1)
1-143: Excellent test coverage for ReportNLGEngine!

This test module provides comprehensive coverage for the ReportNLGEngine's code audit and team documentation generation methods. The tests properly:
- Mock the LLMClient to isolate unit behavior
- Validate prompt correctness using `get_template` and `fill_template`
- Cover success paths, missing data, empty LLM responses, and exception handling
- Follow consistent patterns across all test cases
The approach of validating prompts (lines 40-46, 102-108) is particularly valuable for ensuring stable LLM interactions.
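As a concrete illustration of that pattern, here is a hedged sketch of one such prompt-validation test. ReportNLGEngine, generate_team_documentation_text, get_template, and fill_template come from the code graph above; the module paths, the "team_documentation" template key, the constructor signature, and the raw_data payload are assumptions:

```python
import json
from unittest.mock import AsyncMock

import pytest

# Module paths assumed from the repository layout shown above.
from app.services.nlg.report_nlg_engine import ReportNLGEngine
from app.services.nlg.prompt_templates import fill_template, get_template


@pytest.mark.asyncio  # assuming pytest-asyncio, as the async suite suggests
async def test_team_documentation_prompt_matches_template():
    llm = AsyncMock()
    llm.generate_text.return_value = "generated documentation summary"
    engine = ReportNLGEngine(llm_client=llm)  # constructor signature assumed

    raw_data = {"team_size": 5, "docs": "present"}  # illustrative payload
    await engine.generate_team_documentation_text(raw_data)

    # Rebuild the expected prompt from the same helpers the engine uses,
    # reusing raw_data as the nitpick above recommends.
    expected_prompt = fill_template(
        get_template("team_documentation"),  # template key assumed
        data=json.dumps(raw_data, indent=2),
    )
    llm.generate_text.assert_called_with(expected_prompt)
```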
backend/app/services/nlg/tests/test_nlg_engine.py (2)
1-26: Well-designed test refactoring with proper mocking!

The refactoring from HTTP-based mocking to an LLMClient AsyncMock significantly improves test isolation and clarity. The ConcreteNLGEngine provides a clean testing implementation, and the mock_llm_client fixture properly sets up the async context manager.

Note: the static analysis hint about the unused `raw_data` parameter on line 10 is a false positive; the parameter is required to match the abstract method signature from the base class.
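A sketch of why that signature must keep the parameter. Only the ConcreteNLGEngine name and the `raw_data` argument appear in the review; the base-class shape and the method name below are placeholders:

```python
from abc import ABC, abstractmethod


class NLGEngine(ABC):
    # Placeholder base class; the real one lives in the nlg module.
    @abstractmethod
    async def build_custom_section(self, raw_data: dict) -> str: ...


class ConcreteNLGEngine(NLGEngine):
    # Minimal test double: the override must accept raw_data to match
    # the abstract signature even though it never reads it, which is
    # exactly why Ruff's ARG002 hint is a false positive here.
    async def build_custom_section(self, raw_data: dict) -> str:
        return "stub section"
```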
29-217: Comprehensive test coverage for NLG engine methods!

The test suite thoroughly validates tokenomics, onchain, and sentiment generation across multiple scenarios:
- Success paths with prompt validation
- Missing/empty data handling
- Empty LLM responses triggering error messages
- Exception handling for LLM failures
The consistent test structure and explicit prompt validation (using `get_template` and `fill_template`) ensure reliable and maintainable tests.
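For instance, the failure path could look roughly like this sketch, reusing the ConcreteNLGEngine stand-in from above. The method name, constructor, and fallback wording are assumptions:

```python
from unittest.mock import AsyncMock

import pytest


@pytest.mark.asyncio
async def test_generate_sentiment_text_handles_llm_failure():
    llm = AsyncMock()
    # side_effect makes the awaited call raise instead of returning text.
    llm.generate_text.side_effect = RuntimeError("LLM unavailable")
    engine = ConcreteNLGEngine(llm_client=llm)  # constructor assumed

    result = await engine.generate_sentiment_text({"sentiment": "positive"})

    # Per the review, LLM failures surface as an error message in the
    # output rather than propagating; the exact wording is illustrative.
    assert "error" in result.lower()
```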
Nice, the mocked LLM responses in the tests look solid; this should catch edge cases well.
This PR introduces new tests for the NLG generator functions to ensure their reliability and correctness.
Changes
- Added tests under `backend/app/services/nlg/tests/` for each generator function.