Feature: Implement LLM-driven code audit summary generation #51
Walkthrough

This pull request adds a new asynchronous method to the `NLGEngine` class that generates code audit summaries by combining code and audit data, constructing a prompt from a new template, and invoking the LLM client with comprehensive error handling and response validation.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant NLGEngine
    participant PromptTemplates
    participant LLMClient
    participant Logger

    Client->>NLGEngine: generate_code_audit_text(code_data, audit_data)
    alt Both data inputs present
        NLGEngine->>NLGEngine: Convert to JSON strings with indentation
        NLGEngine->>PromptTemplates: get_template("code_audit_summary")
        PromptTemplates-->>NLGEngine: Return template
        NLGEngine->>NLGEngine: Fill template with combined data
        NLGEngine->>LLMClient: invoke(prompt)
        alt LLM Success
            LLMClient-->>NLGEngine: Generated text
            NLGEngine-->>Client: Return audit summary
        else LLM Error or Empty
            LLMClient-->>NLGEngine: Exception or empty response
            NLGEngine->>Logger: Log error
            NLGEngine-->>Client: Return failure message
        end
    else Data absent
        NLGEngine->>Logger: Log warning
        NLGEngine-->>Client: Return "Data not yet available" message
    end
```
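Read end to end, the diagram corresponds to roughly the following shape. This is a sketch reconstructed from the walkthrough, not the diff itself: the import paths, the `code_audit_summary` section id in the returned envelope, the constructor, the response shape, the `fill_template` signature, and the failure-message wording are all assumptions; only the `get_template`, `fill_template`, and `generate_text` names come from this review.

```python
import json
import logging

from app.services.nlg import prompt_templates  # assumed import path
from app.services.nlg.llm_client import LLMClient

logger = logging.getLogger(__name__)


class NLGEngine:
    def __init__(self, llm_client: LLMClient):
        self.llm_client = llm_client  # assumed constructor shape

    async def generate_code_audit_text(self, code_data: dict, audit_data: dict) -> str:
        """Generate the code audit summary section (sketch of the diagrammed flow)."""
        if not code_data and not audit_data:
            logger.warning("No code or audit data available for audit summary")
            return json.dumps({
                "section_id": "code_audit_summary",  # assumed section id
                "text": "Data not yet available.",
            })

        # Serialize each input with indentation, substituting "N/A" when absent.
        code_json = json.dumps(code_data, indent=2) if code_data else "N/A"
        audit_json = json.dumps(audit_data, indent=2) if audit_data else "N/A"

        template = prompt_templates.get_template("code_audit_summary")
        prompt = prompt_templates.fill_template(
            template, code_data=code_json, audit_data=audit_json
        )

        try:
            response = await self.llm_client.generate_text(prompt)
            # Assumed OpenAI-style payload: choices[0].message.content.
            content = response["choices"][0]["message"]["content"]
            if not content:
                raise ValueError("LLM returned empty content")
        except Exception:
            logger.exception("Code audit summary generation failed")
            return json.dumps({
                "section_id": "code_audit_summary",
                "text": "Failed to generate code audit summary.",  # assumed wording
            })

        return json.dumps({"section_id": "code_audit_summary", "text": content})
```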
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 0
🧹 Nitpick comments (2)
backend/app/services/nlg/nlg_engine.py (1)
130-163: Code audit summary generation is sound; consider deduping LLM call logic

The control flow looks correct: you short-circuit when both inputs are empty, fill the template with `"N/A"` for missing parts, treat empty LLM content as an error, and return a consistent JSON wrapper in all paths. This matches the existing section generators.

You do, however, duplicate the LLM invocation / response-parsing pattern from `_generate_section_with_llm`. If you want to keep Ruff happy (TRY301/TRY003 on the explicit `ValueError`) and reduce repetition, consider extracting a small helper like `_generate_with_llm(prompt: str, section_id: str, error_msg: str)` that encapsulates the `generate_text` call, `choices` extraction, empty-content check, and exception handling, then use it both here and in `_generate_section_with_llm`. That would centralize error behavior and message text while keeping this method focused on combining `code_data` and `audit_data`.
backend/app/services/nlg/tests/test_nlg_engine.py (1)

176-179: Remove duplicated assertions in sentiment LLM-error test

These three assertions repeat the ones immediately above in the same test function and add no extra coverage. Dropping the duplicated block will keep the test concise without changing behavior.
```diff
-    assert parsed_response["section_id"] == "social_sentiment"
-    assert "Failed to generate social sentiment summary due to an internal error." in parsed_response["text"]
-    assert respx_mock.calls.call_count == 1
-
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (3)
backend/app/services/nlg/__pycache__/nlg_engine.cpython-313.pyc is excluded by !**/*.pyc
backend/app/services/nlg/__pycache__/prompt_templates.cpython-313.pyc is excluded by !**/*.pyc
backend/app/services/nlg/tests/__pycache__/test_nlg_engine.cpython-313-pytest-8.4.2.pyc is excluded by !**/*.pyc
📒 Files selected for processing (3)
backend/app/services/nlg/nlg_engine.py (1 hunks)
backend/app/services/nlg/prompt_templates.py (1 hunks)
backend/app/services/nlg/tests/test_nlg_engine.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
backend/app/services/nlg/nlg_engine.py (2)
backend/app/services/nlg/prompt_templates.py (2)
get_template (6-125)
fill_template (127-132)

backend/app/services/nlg/llm_client.py (2)
LLMClient (9-55)
generate_text (30-55)
backend/app/services/nlg/tests/test_nlg_engine.py (1)
backend/app/services/nlg/nlg_engine.py (1)
generate_code_audit_text (130-163)
🪛 Ruff (0.14.5)
backend/app/services/nlg/nlg_engine.py
156-156: Abstract raise to an inner function
(TRY301)
156-156: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (2)
backend/app/services/nlg/prompt_templates.py (1)
70-90: Template wiring and content look correct

The `"code_audit_summary"` template is consistent with the existing templates, uses `{code_data}`/`{audit_data}` placeholders that match `generate_code_audit_text`, and clearly guides the model toward the desired structure (clarity, risks, activity, quality, missing info). No issues from a formatting or API-usage standpoint.
backend/app/services/nlg/tests/test_nlg_engine.py (1)

181-244: New code-audit tests give good coverage of success and failure paths

The new tests for `generate_code_audit_text` exercise the main behaviors: successful generation, both inputs empty, upstream error response, and empty content from the model. They assert on `section_id`, key text fragments, and call counts, mirroring the existing patterns for other sections. This should be sufficient to guard regressions in the new feature.
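As an illustration of the pattern these tests follow, a minimal success-path sketch. The mocked endpoint URL, engine wiring, response shape, and the `code_audit_summary` section id are assumptions rather than details taken from the diff; the suite is also assumed to run under pytest-asyncio with respx's `respx_mock` fixture, which the duplicated-assertion snippet above suggests it already does:

```python
import json

import pytest
from httpx import Response

from app.services.nlg.llm_client import LLMClient  # assumed import paths
from app.services.nlg.nlg_engine import NLGEngine


@pytest.mark.asyncio
async def test_generate_code_audit_text_success(respx_mock):
    # Assumed LLM endpoint; the real tests mock whatever URL LLMClient calls.
    respx_mock.post("https://api.openai.com/v1/chat/completions").mock(
        return_value=Response(
            200,
            json={"choices": [{"message": {"content": "Audit looks healthy."}}]},
        )
    )

    engine = NLGEngine(LLMClient())  # assumed wiring
    result = await engine.generate_code_audit_text(
        code_data={"stars": 120}, audit_data={"issues": []}
    )

    parsed_response = json.loads(result)
    assert parsed_response["section_id"] == "code_audit_summary"  # assumed id
    assert "Audit looks healthy." in parsed_response["text"]
    assert respx_mock.calls.call_count == 1
```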
Overview: This PR introduces a new function to generate comprehensive code audit summaries using LLM prompts.
Changes
Added `generate_code_audit_text(code_data, audit_data)` to create audit summaries.
Improved the `code_audit_agent`'s ability to provide detailed insights.

Summary by CodeRabbit
Release Notes
New Features
Tests