
Conversation


@klingonaston klingonaston commented Nov 8, 2025

Overview: Adds comprehensive unit tests for the CodeAuditAgent to verify reliable metric extraction and robust error handling.

Changes

  • Mocked GitHub/GitLab API responses to test fetch_repo_metrics and analyze_code_activity.
  • Verified correct metric extraction and JSON output format.
  • Included tests for audit report retrieval.
  • Added error handling tests for missing or inaccessible audit data.
  • The main changes are in backend/app/services/agents/code_audit_agent.py and its new test file.
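The graceful-degradation contract these tests exercise can be sketched as follows. The names `fetch_repo_metrics` and `CodeMetrics` echo the PR, but the bodies below are a hypothetical illustration of the pattern (zeroed metrics and an "N/A" release on any failure), not the project's actual implementation:

```python
import asyncio
from dataclasses import dataclass
from unittest.mock import AsyncMock

@dataclass
class CodeMetrics:
    commits: int = 0
    contributors: int = 0
    latest_release: str = "N/A"

async def fetch_repo_metrics(client, repo_url: str) -> CodeMetrics:
    """Degrade to zeroed metrics and an "N/A" release instead of raising."""
    if "github.com" not in repo_url and "gitlab.com" not in repo_url:
        return CodeMetrics()          # unsupported host: no API call attempted
    try:
        data = await client.get(repo_url)
    except Exception:                 # HTTP 404s and network errors alike
        return CodeMetrics()
    return CodeMetrics(commits=data.get("commits", 0),
                       contributors=data.get("contributors", 0),
                       latest_release=data.get("latest_release", "N/A"))

# Simulate a network failure and assert graceful degradation.
client = AsyncMock()
client.get.side_effect = ConnectionError("network down")
metrics = asyncio.run(fetch_repo_metrics(client, "https://github.com/owner/repo"))
assert metrics == CodeMetrics()       # all zeroes, latest_release == "N/A"
```

The real tests mock at the HTTP layer with respx rather than mocking the client object, but the asserted outcome is the same: failures surface as default metrics, never as exceptions.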

Summary by CodeRabbit

  • Tests
    • Expanded error-path coverage for repository-metrics fetching: added tests for HTTP 404s, network errors, and unsupported/invalid repo URLs across providers.
    • Adjusted test fixtures to simulate empty API results and preserved logging suppression during tests.
    • Minor test content rearrangements to align with new error-path scenarios.


coderabbitai bot commented Nov 8, 2025

Walkthrough

Adds error-path unit tests for fetch_repo_metrics in the CodeAuditAgent: covers HTTP 404s, network request errors, unsupported hosts, and invalid GitHub/GitLab URL formats, asserting zeroed metrics and "N/A" releases.

Changes

Cohort: Error-handling test coverage
File(s): backend/app/services/agents/tests/test_code_audit_agent.py
Summary: Added tests for HTTP 404 responses and network RequestError scenarios for GitHub and GitLab endpoints; added tests for unsupported (Bitbucket) and invalid URL formats; updated imports to include HTTPStatusError and RequestError; adjusted mock responses (empty lists); kept logging suppression.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Verify mock setups correctly simulate 404 and RequestError for both GitHub and GitLab.
  • Confirm assertions expect zeroed metrics and "N/A" releases in each failure scenario.
  • Check logging suppression doesn't hide useful test diagnostics.

Possibly related PRs

Suggested reviewers

  • felixjordandev

Poem

🐇 I hopped through mocks at break of day,
404s and timeouts kept bugs at bay.
Empty lists, "N/A" in sight,
Tests now catch the missing light.
Coverage blooms — a rabbit's delight 🌿

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title check ✅ Passed: The title accurately describes the main change: adding comprehensive unit tests for the Code Audit Agent, which aligns with the file modification summary and PR objectives.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feat/code-audit-agent-tests

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b1664cb and ab10507.

⛔ Files ignored due to path filters (1)
  • backend/app/services/agents/tests/__pycache__/test_code_audit_agent.cpython-313-pytest-8.4.2.pyc is excluded by !**/*.pyc
📒 Files selected for processing (1)
  • backend/app/services/agents/tests/test_code_audit_agent.py (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • backend/app/services/agents/tests/test_code_audit_agent.py



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
backend/app/services/agents/tests/test_code_audit_agent.py (1)

106-113: Add missing json parameter to Response mocks.

Lines 108, 109, and 112 are missing the json parameter in the Response mocks, while the similar test at lines 24-31 correctly includes json=[]. The implementation calls .json() as a fallback in the parse_link_header helper, which could fail without valid JSON content.

Apply this diff to add the missing parameters:

         # Mock GitHub API calls for fetch_repo_metrics
-        respx.get(f"https://api.github.com/repos/{owner}/{repo}/commits?per_page=1").mock(return_value=Response(200, headers={'link': '<https://api.github.com/repositories/1296269/commits?per_page=1&page=2>; rel="next", <https://api.github.com/repositories/1296269/commits?per_page=1&page=10>; rel="last"'}))
-        respx.get(f"https://api.github.com/repos/{owner}/{repo}/contributors?per_page=1").mock(return_value=Response(200, headers={'link': '<https://api.github.com/repositories/1296269/contributors?per_page=1&page=2>; rel="next", <https://api.github.com/repositories/1296269/contributors?per_page=1&page=5>; rel="last"'}))
+        respx.get(f"https://api.github.com/repos/{owner}/{repo}/commits?per_page=1").mock(return_value=Response(200, headers={'link': '<https://api.github.com/repositories/1296269/commits?per_page=1&page=2>; rel="next", <https://api.github.com/repositories/1296269/commits?per_page=1&page=10>; rel="last"'}, json=[]))
+        respx.get(f"https://api.github.com/repos/{owner}/{repo}/contributors?per_page=1").mock(return_value=Response(200, headers={'link': '<https://api.github.com/repositories/1296269/contributors?per_page=1&page=2>; rel="next", <https://api.github.com/repositories/1296269/contributors?per_page=1&page=5>; rel="last"'}, json=[]))
         respx.get(f"https://api.github.com/repos/{owner}/{repo}/releases/latest").mock(return_value=Response(200, json={'tag_name': 'v1.0.0'}))
         respx.get(f"https://api.github.com/search/issues?q=repo%3A{owner}%2F{repo}%2Btype%3Aissue&per_page=1").mock(return_value=Response(200, json={'total_count': 20}))
-        respx.get(f"https://api.github.com/repos/{owner}/{repo}/pulls?state=all&per_page=1").mock(return_value=Response(200, headers={'link': '<https://api.github.com/repositories/1296269/pulls?state=all&per_page=1&page=2>; rel="next", <https://api.github.com/repositories/1296269/pulls?state=all&per_page=1&page=15>; rel="last"'}))
+        respx.get(f"https://api.github.com/repos/{owner}/{repo}/pulls?state=all&per_page=1").mock(return_value=Response(200, headers={'link': '<https://api.github.com/repositories/1296269/pulls?state=all&per_page=1&page=2>; rel="next", <https://api.github.com/repositories/1296269/pulls?state=all&per_page=1&page=15>; rel="last"'}, json=[]))
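These `link` headers follow GitHub's pagination format, where the total count is read from the page number in the `rel="last"` entry. A minimal standalone parser sketch of that idea (hypothetical; the project's `parse_link_header` helper may differ in signature and fallback behavior):

```python
import re

def parse_last_page(link_header: str) -> int:
    """Extract the page number from the rel="last" entry of a GitHub
    Link header; return 1 when no pagination links are present."""
    # Entries look like: <https://...?per_page=1&page=10>; rel="last"
    for url, rel in re.findall(r'<([^>]+)>;\s*rel="(\w+)"', link_header):
        if rel == "last":
            match = re.search(r'[?&]page=(\d+)', url)
            if match:
                return int(match.group(1))
    return 1

header = ('<https://api.github.com/repositories/1296269/commits?per_page=1&page=2>; rel="next", '
          '<https://api.github.com/repositories/1296269/commits?per_page=1&page=10>; rel="last"')
print(parse_last_page(header))  # 10
```

With `per_page=1`, the last page number equals the item count, which is why the mocked headers above stand in for commit, contributor, and PR totals.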
♻️ Duplicate comments (1)
backend/app/services/agents/tests/test_code_audit_agent.py (1)

196-200: Minor: Request object uses incorrect URL.

Same issue as the GitHub network error test - the Request objects use repo_url instead of actual API endpoint URLs. This doesn't affect functionality but isn't fully realistic.

🧹 Nitpick comments (3)
backend/app/services/agents/tests/test_code_audit_agent.py (3)

6-9: Consider using pytest's caplog fixture instead of module-level logging configuration.

Module-level basicConfig can interfere with logging configuration in other test modules and makes tests less isolated. pytest's caplog fixture provides better test isolation.

Apply this diff to use caplog instead:

-import logging
-
-# Suppress logging during tests to avoid clutter
-logging.basicConfig(level=logging.CRITICAL)

Then in each test function that needs logging suppression:

async def test_fetch_github_repo_metrics_http_error(code_audit_agent, caplog):
    with caplog.at_level(logging.CRITICAL):
        # test code

Alternatively, create a fixture to suppress logging:

@pytest.fixture(autouse=True)
def suppress_logging(caplog):
    caplog.set_level(logging.CRITICAL)

152-156: Minor: Request object uses incorrect URL.

The Request objects in the RequestError mocks use repo_url (e.g., "https://github.com/owner/repo") instead of the actual API endpoint URLs. While this doesn't affect test functionality since the request parameter isn't used in assertions, it's not fully realistic.

For better realism, consider using the actual endpoint URLs or omit the request parameter entirely:

-        respx.get(f"https://api.github.com/repos/{owner}/{repo}/commits?per_page=1").mock(side_effect=RequestError("Network error", request=Request("GET", repo_url)))
+        respx.get(f"https://api.github.com/repos/{owner}/{repo}/commits?per_page=1").mock(side_effect=RequestError("Network error", request=Request("GET", f"https://api.github.com/repos/{owner}/{repo}/commits?per_page=1")))

Or simply:

-        respx.get(f"https://api.github.com/repos/{owner}/{repo}/commits?per_page=1").mock(side_effect=RequestError("Network error", request=Request("GET", repo_url)))
+        respx.get(f"https://api.github.com/repos/{owner}/{repo}/commits?per_page=1").mock(side_effect=RequestError("Network error"))

121-245: Optional: Consider parametrizing error handling tests.

The error handling tests (lines 121-245) share significant structural similarity. While the current implementation is clear and maintainable, parametrization could reduce duplication.

Example parametrized approach:

@pytest.mark.parametrize("platform,base_url,error_type", [
    ("github", "https://api.github.com/repos/owner/repo", "http"),
    ("github", "https://api.github.com/repos/owner/repo", "network"),
    ("gitlab", "https://gitlab.com/api/v4/projects/owner%2Frepo", "http"),
    ("gitlab", "https://gitlab.com/api/v4/projects/owner%2Frepo", "network"),
])
@pytest.mark.asyncio
async def test_fetch_repo_metrics_errors(code_audit_agent, platform, base_url, error_type):
    # Common test logic
    pass

However, the current approach is perfectly acceptable and arguably more readable for error scenarios. This is purely a maintainability suggestion.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9e7084c and b1664cb.

⛔ Files ignored due to path filters (1)
  • backend/app/services/agents/tests/__pycache__/test_code_audit_agent.cpython-313-pytest-8.4.2.pyc is excluded by !**/*.pyc
📒 Files selected for processing (1)
  • backend/app/services/agents/tests/test_code_audit_agent.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
backend/app/services/agents/tests/test_code_audit_agent.py (1)
backend/app/services/agents/code_audit_agent.py (5)
  • CodeAuditAgent (44-290)
  • CodeMetrics (23-31)
  • AuditSummary (33-38)
  • CodeAuditResult (40-42)
  • fetch_repo_metrics (177-220)
🔇 Additional comments (5)
backend/app/services/agents/tests/test_code_audit_agent.py (5)

121-143: Well-structured error handling test.

This test correctly verifies that HTTP 404 errors result in graceful degradation to default/zeroed metrics rather than propagating exceptions. The mock setup comprehensively covers all GitHub API endpoints.


168-188: LGTM!

This test correctly verifies GitLab API 404 error handling with comprehensive endpoint coverage and appropriate assertions for default values.


212-222: Good coverage for unsupported repository platforms.

This test correctly verifies that unsupported URLs (Bitbucket in this case) gracefully return default metrics without attempting API calls.


224-234: Good coverage for malformed GitHub URLs.

This test verifies that invalid GitHub URL formats (missing repository name) are handled gracefully without API calls.


236-245: Good coverage for malformed GitLab URLs.

This test verifies that invalid GitLab URL formats are handled gracefully, maintaining consistency with the GitHub invalid format test.
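Taken together, the unsupported-host and invalid-format tests amount to a small URL-parsing contract. A hypothetical sketch of such a helper (the function name and return shape here are assumptions, not the agent's real code):

```python
from urllib.parse import urlparse

def parse_repo_url(repo_url: str):
    """Return (provider, owner, repo) for supported hosts, or None for
    unsupported hosts and paths missing an owner or repository segment."""
    parsed = urlparse(repo_url)
    host = parsed.netloc.lower()
    parts = [p for p in parsed.path.split("/") if p]
    if host == "github.com" and len(parts) >= 2:
        return ("github", parts[0], parts[1])
    if host == "gitlab.com" and len(parts) >= 2:
        return ("gitlab", parts[0], "/".join(parts[1:]))  # GitLab allows subgroups
    return None  # Bitbucket etc., or a malformed path

assert parse_repo_url("https://github.com/owner/repo") == ("github", "owner", "repo")
assert parse_repo_url("https://bitbucket.org/owner/repo") is None
assert parse_repo_url("https://github.com/owner") is None
```

Returning None for every unrecognized input is what lets the caller skip API calls entirely and fall back to default metrics, matching the behavior the three tests above assert.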

@felixjordandev felixjordandev merged commit e91d1b6 into main Nov 8, 2025
1 check passed
@felixjordandev felixjordandev deleted the feat/code-audit-agent-tests branch November 8, 2025 13:30
