Feat: Add tokenomics text generation to NLG engine #47
Conversation
Walkthrough

A new asynchronous method, `generate_tokenomics_text(raw_data)`, is added to the NLG engine. It validates the incoming `raw_data`, builds a prompt from the tokenomics section template, invokes the LLM client, and returns a JSON-formatted payload, with fallback text when the data is missing or the LLM call fails.

Changes

`backend/app/services/nlg/nlg_engine.py` (2 hunks): added the async `generate_tokenomics_text` method and its supporting imports.
Sequence Diagram

```mermaid
sequenceDiagram
participant Caller
participant NLGEngine
participant TemplateService
participant LLMClient
Caller->>NLGEngine: generate_tokenomics_text(raw_data)
Note over NLGEngine: Validate raw_data
NLGEngine->>TemplateService: get_template("tokenomics")
TemplateService-->>NLGEngine: template
NLGEngine->>NLGEngine: fill_template(template, raw_data)
NLGEngine->>LLMClient: invoke with prompt
alt LLM Success
LLMClient-->>NLGEngine: generated_text
NLGEngine->>NLGEngine: _format_output(generated_text)
else LLM Error or Empty Response
NLGEngine->>NLGEngine: apply fallback
NLGEngine->>NLGEngine: log error
end
NLGEngine-->>Caller: JSON-formatted result
```
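For reference, here is a minimal sketch of the flow the diagram describes, assuming the names shown there and in the code graph below (`get_template`/`fill_template` stand in for the `TemplateService` participant); the fallback message for missing data is a placeholder, and the actual implementation may differ:

```python
# Hypothetical sketch of generate_tokenomics_text; names follow the
# sequence diagram and code graph, not the exact source.
async def generate_tokenomics_text(self, raw_data: Dict[str, Any]) -> str:
    if not raw_data:
        # Fallback for missing input data (placeholder wording)
        return self._format_output({
            "section_id": "tokenomics",
            "text": "Tokenomics data is not available for this report.",
        })

    # Build the prompt from the tokenomics section template
    template = get_template("tokenomics")
    prompt = fill_template(template, raw_data)

    try:
        # LLMClient is used as an async context manager
        async with LLMClient() as client:
            generated_text = await client.generate_text(prompt)
        return self._format_output({
            "section_id": "tokenomics",
            "text": generated_text,
        })
    except Exception as e:
        # Fallback on LLM error or empty response
        print(f"Error generating tokenomics text with LLM: {e}")
        return self._format_output({
            "section_id": "tokenomics",
            "text": "Failed to generate tokenomics summary due to an internal error. Please try again later.",
        })
```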
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~15 minutes
Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 0
🧹 Nitpick comments (2)
backend/app/services/nlg/nlg_engine.py (2)
3-6: Imports and typing look fine; optional consistency tweak

The new imports for `Dict`, `Any`, `LLMClient`, and the prompt helpers are correct and align with the new method's usage. For future cleanup, you might consider aligning type hints in this class (e.g., using `Dict[str, Any]` or `dict[str, Any]` consistently across `generate_section_text`, `generate_full_report`, and `generate_tokenomics_text`), but it's not blocking.
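For instance, the signatures could be aligned on the built-in generics; the parameter lists of the two existing methods are assumptions here:

```python
# Illustrative only: one consistent annotation style across the class
# (exact parameters of generate_section_text and generate_full_report
# are assumed, not taken from the source).
from typing import Any

class NLGEngine:
    async def generate_section_text(self, section_id: str, raw_data: dict[str, Any]) -> str: ...
    async def generate_full_report(self, raw_data: dict[str, Any]) -> str: ...
    async def generate_tokenomics_text(self, raw_data: dict[str, Any]) -> str: ...
```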
51-79: Tokenomics generation flow is solid; consider tightening error handling & logging

The overall flow looks good: you handle missing `raw_data`, build a section-specific prompt from the tokenomics template, call the LLM via `LLMClient` as an async context manager, and always return a JSON-structured payload via `_format_output`. Fallback text for both missing data and LLM failure is clear and user-safe.

Two non-blocking improvements to consider:
- Use structured logging instead of `print`

The `print` call bypasses the application's logging configuration; a module-level logger preserves the traceback and respects log levels:

```diff
-import json
+import json
+import logging
+
+logger = logging.getLogger(__name__)
 ...
     async def generate_tokenomics_text(self, raw_data: Dict[str, Any]) -> str:
         ...
         except Exception as e:
-            # Log the exception for debugging purposes
-            print(f"Error generating tokenomics text with LLM: {e}")
+            # Log the exception for debugging purposes
+            logger.exception("Error generating tokenomics text with LLM")
             return self._format_output({
                 "section_id": "tokenomics",
                 "text": "Failed to generate tokenomics summary due to an internal error. Please try again later."
             })
```
- Avoid catching completely generic `Exception`

As Ruff pointed out (BLE001), catching bare `Exception` can hide programming errors. If feasible, narrow this to the expected failure modes (e.g., a `ValueError` from empty content, plus whatever your `LLMClient` raises), or keep the broad catch but re-raise unexpected exceptions after logging to avoid silently masking bugs; see the sketch below.

These are quality-of-life improvements; the current implementation is functionally acceptable.
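A minimal sketch of the re-raise variant, assuming a hypothetical `LLMClientError`; the real exception types depend on what `LLMClient` actually raises:

```python
# Sketch only: LLMClientError is a placeholder for the client's real
# exception type(s).
try:
    async with LLMClient() as client:
        generated_text = await client.generate_text(prompt)
except (ValueError, LLMClientError):
    # Expected failure modes: empty content, transport/provider errors
    logger.exception("Error generating tokenomics text with LLM")
    return self._format_output({
        "section_id": "tokenomics",
        "text": "Failed to generate tokenomics summary due to an internal error. Please try again later.",
    })
except Exception:
    # Unexpected bug: log it, then re-raise instead of masking it
    logger.exception("Unexpected error in generate_tokenomics_text")
    raise
```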
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
backend/app/services/nlg/nlg_engine.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
backend/app/services/nlg/nlg_engine.py (2)
backend/app/services/nlg/llm_client.py (2)
- LLMClient (9-55)
- generate_text (30-55)

backend/app/services/nlg/prompt_templates.py (2)
- get_template (6-71)
- fill_template (73-78)
🪛 Ruff (0.14.5)
backend/app/services/nlg/nlg_engine.py
70-70: Abstract raise to an inner function
(TRY301)
70-70: Avoid specifying long messages outside the exception class
(TRY003)
72-72: Do not catch blind exception: Exception
(BLE001)
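For context, the refactor the TRY301/TRY003 findings point toward looks roughly like this; it is a sketch, assuming the flagged raise is a check for empty LLM content, and `EmptyLLMResponseError`/`_ensure_content` are hypothetical names:

```python
# Sketch of the Ruff-suggested pattern: move the raise into a small
# helper (TRY301) and keep the long message inside the exception class
# itself (TRY003). Names here are illustrative, not from the source.
class EmptyLLMResponseError(ValueError):
    def __init__(self) -> None:
        super().__init__("LLM returned empty content for the tokenomics section")

def _ensure_content(text: str) -> str:
    if not text:
        raise EmptyLLMResponseError()
    return text
```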
Overview: This PR introduces a new function to generate tokenomics-related text for report summaries.

Changes
- Added the asynchronous method `generate_tokenomics_text(raw_data)` within `backend/app/services/nlg/nlg_engine.py`.
- Added a fallback mechanism for missing `raw_data`.