
Improve code review prompts and add prompt testing tools #9

Merged

kratos06 merged 1 commit into master from feature/remote-repo-support on Apr 28, 2025

Conversation

kratos06 (Owner) commented Apr 28, 2025

  1. Update code review prompts to better identify effective code changes
  2. Add detailed working hours estimation guidelines
  3. Create test_prompt.py for testing individual diff files (see the sketch after this list)
  4. Create batch_test_prompts.py for batch testing from GitLab
  5. Create test_codedog_prompt.py for testing with CodeDog prompts
  6. Add test cases with high and low score examples
  7. Add documentation for prompt testing tools
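
As a rough illustration of what item 3 automates, the sketch below reads one categorized diff, sends it to a chat model together with the system prompt, and prints the reply. The file paths, model name, and use of the plain OpenAI client are assumptions for this sketch, not the actual implementation of test_prompt.py.

```python
# Hedged sketch only: score a single diff file with a chat model, roughly
# what test_prompt.py automates. Paths and the model name are placeholders;
# the real script may use a different client, flags, and prompt handling.
from pathlib import Path

from openai import OpenAI

system_prompt = Path("custom_system_prompt.txt").read_text(encoding="utf-8")
diff_text = Path("test_diffs/high_score/python_feature_enhancement.diff").read_text(encoding="utf-8")

client = OpenAI()  # expects OPENAI_API_KEY in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; the tools support multiple models
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Review this diff and return the structured scores:\n\n{diff_text}"},
    ],
)
print(response.choices[0].message.content)
```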

Summary by CodeRabbit

  • New Features

    • Introduced tools for automated code review prompt testing, including single-file and batch testing against code diffs or snippets.
    • Added support for evaluating code changes using multiple models and custom system prompts, with detailed scoring across readability, efficiency, security, structure, error handling, documentation, code style, and overall score (an example result is sketched after this summary).
    • Implemented working hours estimation and classification of effective vs. non-effective code changes.
  • Documentation

    • Added comprehensive guides and usage examples for code review prompt testing tools and test diff collections.
  • Test Data

    • Provided categorized test diffs (high, low, and mixed quality) for benchmarking code review prompts.
  • Bug Fixes

    • Improved error handling and logging in prompt evaluation and batch testing processes.
  • Refactor

    • Enhanced prompt structure, output formatting, and evaluation logic for more robust and detailed code review analysis.
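
To make the scoring dimensions and classifications listed above concrete, here is a hedged sketch of what a per-file result could look like, written as a Python dict. The field names, score scale, and values are illustrative assumptions, not the exact schema produced by codedog/utils/code_evaluator.py.

```python
# Hedged example of one file's review record; the real schema may use
# different keys, ranges, or nesting.
example_review = {
    "file": "test_diffs/high_score/python_feature_enhancement.diff",
    "scores": {
        "readability": 8,
        "efficiency": 7,
        "security": 9,
        "structure": 8,
        "error_handling": 7,
        "documentation": 6,
        "code_style": 8,
        "overall": 8,
    },
    "effective_code_change": True,   # effective vs. non-effective classification
    "estimated_working_hours": 2.5,  # working hours estimation
    "comments": "Adds input validation and tests; docstrings could be expanded.",
}
```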

coderabbitai bot commented Apr 28, 2025

Caution

Review failed

The pull request is closed.

Walkthrough

This update introduces a comprehensive framework for evaluating code review prompts using large language models. It adds new command-line tools for testing prompts on single files or in batch mode across multiple GitLab merge requests, with support for custom system prompts and multiple models. The evaluation logic is refactored and expanded to use detailed, structured prompts that require explicit scoring, effective code change classification, and working hours estimation. Output formats are standardized for both JSON and Markdown, and robust error handling and parsing are implemented. Extensive documentation and categorized test diff files are included for prompt benchmarking and comparison.
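
As one way to picture the "robust error handling and parsing" mentioned above, the sketch below pulls a JSON payload out of a model reply that may wrap it in a Markdown code fence or surround it with prose. It is an assumed approach for illustration, not the code in codedog/utils/code_evaluator.py.

```python
# Hedged sketch: tolerate replies that wrap JSON in ```json fences or add prose.
import json
import re


def parse_review_reply(reply: str) -> dict:
    """Return the first JSON object found in an LLM reply, or an error record."""
    # Prefer a fenced ```json block if one exists.
    fenced = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", reply, re.DOTALL)
    candidate = fenced.group(1) if fenced else reply
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        # Fall back to the widest brace-delimited span before giving up.
        start, end = candidate.find("{"), candidate.rfind("}")
        if start != -1 and end > start:
            try:
                return json.loads(candidate[start:end + 1])
            except json.JSONDecodeError:
                pass
        return {"error": "could not parse structured review", "raw": reply}
```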

Changes

| File(s) | Change Summary |
|---------|----------------|
| batch_test_prompts.py, test_prompt.py, test_codedog_prompt.py | Added new command-line tools for testing code review prompts on single files/diffs or in batch mode, supporting multiple models, custom prompts, output formats, and robust parsing. |
| codedog/utils/code_evaluator.py | Refactored and expanded code review prompt logic: detailed English prompt, explicit evaluation dimensions, effective/non-effective code change classification, working hours estimation, improved error handling, and standardized output parsing. |
| custom_system_prompt.txt | Added a new expert reviewer prompt template with explicit evaluation criteria and guidelines for effective code change classification and effort estimation. |
| prompt_testing_README.md | Added documentation detailing setup, usage, and evaluation criteria for the prompt testing tools, including examples and prompt optimization tips. |
| test_diffs/README.md | Added documentation (in Chinese) describing the categorized test diff files, their usage for benchmarking, and evaluation criteria. |
| test_diffs/high_score/*.diff, test_diffs/low_score/*.diff, test_diffs/mixed_score/ | Added categorized test diff files covering high-quality, low-quality, and mixed-quality code changes for benchmarking prompt performance. |
| test_diffs/high_score/css_optimization.diff, test_diffs/high_score/java_refactoring.diff, test_diffs/high_score/javascript_bug_fix.diff, test_diffs/high_score/python_feature_enhancement.diff, test_diffs/high_score/sql_query_optimization.diff | Added detailed diffs representing high-quality code changes in CSS, Java, JavaScript, Python, and SQL. |
| test_diffs/low_score/cpp_readability_issues.diff, test_diffs/low_score/java_error_handling_issues.diff, test_diffs/low_score/javascript_performance_issues.diff, test_diffs/low_score/python_security_issues.diff, test_diffs/low_score/sql_structure_issues.diff | Added diffs representing low-quality code changes with issues in readability, error handling, performance, security, and SQL structure. |
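
For a sense of what the custom_system_prompt.txt template described in the table above might contain, here is a hedged sketch written as a Python string constant. The wording, criteria, and scoring scale are illustrative assumptions, not the file's actual contents.

```python
# Illustrative only; the real custom_system_prompt.txt may differ in wording,
# criteria, and scale.
EXAMPLE_SYSTEM_PROMPT = """\
You are an expert code reviewer. For the diff you are given:
1. Score it from 1-10 on readability, efficiency, security, structure,
   error handling, documentation, and code style, plus an overall score.
2. Classify it as an effective or non-effective code change (for example,
   whitespace-only or generated-file churn counts as non-effective).
3. Estimate the working hours a competent engineer would need to author
   the change, with a one-sentence justification.
Return a single JSON object containing these fields.
"""
```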

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant CLI_Tool as CLI Tool (test_prompt.py / batch_test_prompts.py)
    participant LLM as Language Model API
    participant Evaluator as Code Evaluator Logic
    participant Output as Output Formatter

    User->>CLI_Tool: Provide file/diff or GitLab config, select model/prompt
    CLI_Tool->>Evaluator: Prepare prompt, sanitize input
    Evaluator->>LLM: Send prompt and code/diff for evaluation
    LLM-->>Evaluator: Return structured review (scores, comments, lines, hours)
    Evaluator->>Output: Parse and format results (JSON/Markdown)
    Output-->>CLI_Tool: Save or print results
    CLI_Tool-->>User: Display output or write to file
sequenceDiagram
    participant User
    participant BatchTool as batch_test_prompts.py
    participant GitLab as GitLab API
    participant Evaluator as Code Evaluator
    participant LLM as Language Model API
    participant Output as Output Formatter

    User->>BatchTool: Start batch test with GitLab project info
    BatchTool->>GitLab: Fetch MR diffs (with filters)
    GitLab-->>BatchTool: Return list of MR diffs
    loop For each diff file
        BatchTool->>Evaluator: Prepare prompt for diff
        Evaluator->>LLM: Send prompt and diff
        LLM-->>Evaluator: Return review results
        Evaluator->>Output: Format and store results
    end
    BatchTool->>User: Output summary and comparison reports
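
Mirroring the batch flow in the second diagram, the sketch below shows one plausible way to pull merge request diffs with the python-gitlab client and score each changed file. The environment variable names, listing filters, and the evaluate_diff placeholder are assumptions for illustration, not the actual batch_test_prompts.py implementation.

```python
# Hedged sketch of the batch loop from the diagram above.
# Requires `pip install python-gitlab`; GITLAB_URL, GITLAB_TOKEN and
# GITLAB_PROJECT_ID are placeholder environment variables.
import os

import gitlab


def evaluate_diff(diff_text: str) -> dict:
    """Placeholder for the LLM-backed evaluator (see the earlier sketches)."""
    return {"overall": None, "raw_diff_chars": len(diff_text)}


gl = gitlab.Gitlab(os.environ["GITLAB_URL"], private_token=os.environ["GITLAB_TOKEN"])
project = gl.projects.get(os.environ["GITLAB_PROJECT_ID"])

results = []
for mr in project.mergerequests.list(state="merged", per_page=5, get_all=False):
    for change in mr.changes()["changes"]:  # one entry per changed file
        diff_text = change.get("diff", "")
        if not diff_text:
            continue
        review = evaluate_diff(diff_text)
        results.append({"mr": mr.iid, "file": change["new_path"], **review})

for row in results:
    print(row)
```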

Poem

In the warren, code reviews abound,
With prompts and models hopping all around.
Now rabbits batch-test, parse, and score,
With structured feedback, insights galore!
From high to low, each diff gets its say—
Hop forward, dear coder, it's review time today!

((\
( -.-)
o_(")(")


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e790a18 and 3101b72.

📒 Files selected for processing (17)
  • batch_test_prompts.py (1 hunks)
  • codedog/utils/code_evaluator.py (15 hunks)
  • custom_system_prompt.txt (1 hunks)
  • prompt_testing_README.md (1 hunks)
  • test_codedog_prompt.py (1 hunks)
  • test_diffs/README.md (1 hunks)
  • test_diffs/high_score/css_optimization.diff (1 hunks)
  • test_diffs/high_score/java_refactoring.diff (1 hunks)
  • test_diffs/high_score/javascript_bug_fix.diff (1 hunks)
  • test_diffs/high_score/python_feature_enhancement.diff (1 hunks)
  • test_diffs/high_score/sql_query_optimization.diff (1 hunks)
  • test_diffs/low_score/cpp_readability_issues.diff (1 hunks)
  • test_diffs/low_score/java_error_handling_issues.diff (1 hunks)
  • test_diffs/low_score/javascript_performance_issues.diff (1 hunks)
  • test_diffs/low_score/python_security_issues.diff (1 hunks)
  • test_diffs/low_score/sql_structure_issues.diff (1 hunks)
  • test_prompt.py (1 hunks)

kratos06 merged commit 4ca489c into master on Apr 28, 2025
1 of 3 checks passed
kratos06 deleted the feature/remote-repo-support branch on April 28, 2025, 03:30
@github-actions

Coverage Report
| File | Stmts | Miss | Cover | Missing |
|------|------:|-----:|------:|---------|
| codedog | | | | |
| &nbsp;&nbsp;analyze_code.py | 19 | 19 | 0% | 6–74 |
| &nbsp;&nbsp;localization.py | 17 | 2 | 88% | 17, 30 |
| codedog/actors/reporters | | | | |
| &nbsp;&nbsp;code_review.py | 107 | 43 | 60% | 48–92, 96–111, 116, 144, 146, 148, 150, 157 |
| codedog/chains/code_review | | | | |
| &nbsp;&nbsp;base.py | 64 | 31 | 52% | 34, 42, 50, 57–72, 79–94, 100–111, 114–117, 122–125, 135 |
| &nbsp;&nbsp;translate_code_review_chain.py | 48 | 48 | 0% | 1–103 |
| codedog/chains/pr_summary | | | | |
| &nbsp;&nbsp;base.py | 82 | 20 | 76% | 54, 62, 70, 96–116, 133–136, 178–180, 194–201 |
| &nbsp;&nbsp;translate_pr_summary_chain.py | 60 | 35 | 42% | 40–49, 61–69, 77–85, 91–96, 101–117, 120–130, 135–151 |
| codedog/retrievers | | | | |
| &nbsp;&nbsp;github_retriever.py | 100 | 41 | 59% | 79, 87, 94–95, 98–99, 102, 111–114, 130–134, 150–151, 154, 163–167, 172–179, 197, 202, 205–209, 217–222, 225–228, 231, 242, 252 |
| &nbsp;&nbsp;gitlab_retriever.py | 119 | 76 | 36% | 43–56, 62, 66, 70, 74, 78, 81–85, 88–92, 95, 104, 116, 124–141, 146–149, 155–163, 166–176, 179–203, 206–212, 216, 219–226, 234, 237–240, 243, 254–255, 258 |
| codedog/templates | | | | |
| &nbsp;&nbsp;optimized_code_review_prompt.py | 9 | 9 | 0% | 8–259 |
| codedog/utils | | | | |
| &nbsp;&nbsp;code_evaluator.py | 1382 | 1382 | 0% | 1–3566 |
| &nbsp;&nbsp;email_utils.py | 70 | 59 | 16% | 31–49, 71–113, 134–158 |
| &nbsp;&nbsp;git_hooks.py | 45 | 45 | 0% | 1–146 |
| &nbsp;&nbsp;git_log_analyzer.py | 170 | 170 | 0% | 1–448 |
| &nbsp;&nbsp;langchain_utils.py | 237 | 186 | 22% | 25–41, 64, 68, 78–152, 162–305, 315, 322–343, 350–371, 378–399, 405–416, 422–433, 455–485 |
| &nbsp;&nbsp;remote_repository_analyzer.py | 105 | 105 | 0% | 1–248 |
| TOTAL | 3072 | 2271 | 26% | |

| Tests | Skipped | Failures | Errors | Time |
|------:|--------:|---------:|-------:|-----:|
| 46 | 1 💤 | 1 ❌ | 1 🔥 | 4.058s ⏱️ |
