feat(github): enhance metrics tracking and AI analysis #4
Conversation
This commit introduces several improvements to the GitHub metrics tracking and AI analysis:

1. Add tracking for direct merges to the main branch
2. Enhance the AI analysis prompt for more comprehensive insights
3. Improve formatting of the AI response by stripping XML tags
4. Update metrics models to include the new direct-merges metric

These changes will provide more detailed and actionable insights into team performance and development practices.
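The XML-tag stripping mentioned in item 3 can be sketched with a small regex helper. This is a hypothetical illustration, not the function from the PR; the tag names in the usage are assumptions based on the walkthrough below.

```python
import re

def strip_xml_tags(text: str) -> str:
    """Remove any <tag> / </tag> markers from an AI response,
    keeping only the inner text (illustrative sketch)."""
    return re.sub(r"</?[^<>]+>", "", text).strip()

# Example with an assumed section tag from the AI response format:
cleaned = strip_xml_tags("<strengths>Good test coverage</strengths>")
```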
This commit refactors the format_ai_response function to enhance the parsing and presentation of AI-generated analysis. It introduces a more structured approach to extracting and displaying different sections of the analysis, improving readability and maintainability.
PR Analysis
PR Feedback
    # Extract and display efficiency score and justification
    # Extract performance evaluation sections
    performance_sections = {
Consider using a dictionary to map section tags to their titles and content, which could simplify the extraction and rendering logic by iterating over the dictionary. [medium]
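The reviewer's suggestion can be sketched as follows. The tag names below are assumptions inferred from the files walkthrough, not the actual tags used in the PR:

```python
import re

# Map section tags to display titles once, then iterate,
# instead of repeating the extraction logic per section.
PERFORMANCE_SECTIONS = {
    "strengths": "Strengths",
    "areas_for_improvement": "Areas for Improvement",
    "recommendations": "Recommendations",
}

def extract_performance_sections(response: str) -> dict:
    """Extract each <tag>...</tag> section found in the AI response."""
    extracted = {}
    for tag, title in PERFORMANCE_SECTIONS.items():
        match = re.search(rf"<{tag}>(.*?)</{tag}>", response, re.DOTALL)
        if match:
            extracted[title] = match.group(1).strip()
    return extracted
```

Missing sections are simply skipped, so the rendering loop only ever sees sections that were actually present.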
    section_match = re.search(f"<{section_tag}>(.*?)</{section_tag}>", response, re.DOTALL)
    if section_match:
        content = section_match.group(1).strip()
        console.print(
To improve maintainability, consider extracting the logic for rendering panels into a separate function. This would reduce duplication and make the code easier to update in the future. [medium]
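A minimal sketch of the suggested helper. The real code renders with rich's `Panel` via `console.print`; this plain-text stand-in only illustrates factoring the repeated rendering into one reusable function:

```python
def render_section(title: str, content: str, width: int = 60) -> str:
    """Return a simple boxed rendering of one analysis section.
    (Stand-in for the rich Panel used in github_format_ai.py.)"""
    bar = "-" * width
    return f"{bar}\n{title}\n{bar}\n{content}\n{bar}"
```

Each extracted section then becomes a single call, e.g. `print(render_section("Strengths", content))`, instead of duplicated formatting logic.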
    # Track direct merges to main
    direct_to_main = sum(
        1 for pr in merged_prs
        if pr.base.ref == repo_metrics.default_branch
Consider adding a check to ensure that pr.base.ref and pr.head.ref are not None before comparing them, to prevent potential attribute errors. [important]
    # Create a panel for each section
    # Extract metrics section first
    metrics_match = re.search(r"<metrics_extraction>(.*?)</metrics_extraction>", response, re.DOTALL)
    if metrics_match:
It might be beneficial to add logging for cases where expected sections are not found in the response, to aid in debugging and understanding the AI's output. [medium]
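The suggested logging could look like this. The warning message is illustrative; only the `<metrics_extraction>` tag comes from the snippet above:

```python
import logging
import re

logger = logging.getLogger(__name__)

def extract_metrics_section(response: str):
    """Return the metrics section content, logging a warning
    (rather than failing silently) when the section is absent."""
    match = re.search(
        r"<metrics_extraction>(.*?)</metrics_extraction>", response, re.DOTALL
    )
    if match is None:
        logger.warning("Expected <metrics_extraction> section not found in AI response")
        return None
    return match.group(1).strip()
```

A missing section now leaves a trace in the logs, which makes malformed AI output much easier to diagnose.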
This commit applies consistent formatting to improve code readability in github_format_ai.py and github_metrics.py. It includes line breaks for long lines, proper indentation, and consistent spacing.
PR Type:
Enhancement
PR Description:
PR Main Files Walkthrough:
files:
src/wellcode_cli/github/github_format_ai.py: Refactored the format_ai_response function to improve parsing and presentation of AI-generated analysis. Introduced structured extraction and display of metrics and performance evaluation sections, including metrics extraction, overall efficiency, strengths, areas for improvement, and recommendations. Enhanced handling of the efficiency score and justification.
src/wellcode_cli/github/github_metrics.py: Added logic to track direct merges to the main branch, updating both repository and organization metrics with this new data point.
src/wellcode_cli/github/models/metrics.py: Updated the RepositoryMetrics and OrganizationMetrics classes to include a new attribute, direct_merges_to_main, for tracking direct merges to the main branch.
User Description:
Description
Related Issue
Fixes #
Type of Change
Testing
Checklist