Adding unit tests #17
Conversation
Pull Request Overview
This PR adds comprehensive unit tests to the guardrails codebase, increasing test coverage to 82%. The tests focus on core functionality including client operations, streaming, utility functions, and individual guardrail checks, while temporarily ignoring the eval folder.
Key changes:
- Added unit tests for client sync/async operations, streaming functionality, and base client helpers
- Implemented tests for utility modules (schema, parsing, output, context)
- Created comprehensive test coverage for individual guardrail checks (URLs, secret keys, keywords, etc.)
- Added shared test fixtures and configuration for consistent test environments
Reviewed Changes
Copilot reviewed 24 out of 24 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| tests/unit/utils/* | Tests for utility functions including schema validation, parsing, and output formatting |
| tests/unit/test_streaming.py | Tests for streaming functionality and guardrail integration |
| tests/unit/test_client_*.py | Comprehensive tests for synchronous and asynchronous client operations |
| tests/unit/test_resources_*.py | Tests for chat and response resource wrappers |
| tests/unit/test_*.py | Tests for core modules like registry, context, CLI, and base client |
| tests/unit/checks/* | Tests for individual guardrail implementations |
| tests/conftest.py | Shared pytest fixtures with OpenAI client stubs |
| pyproject.toml | Updated configuration for test coverage and exclusions |
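A plausible shape for the shared fixtures in `tests/conftest.py` is sketched below. This is an assumption, not the PR's actual code: the fixture name (`stub_openai_client`), the `make_stub_openai_client` helper, and the canned response shape are all hypothetical stand-ins for whatever client surface the tests exercise.

```python
from types import SimpleNamespace
from unittest.mock import MagicMock

import pytest


def make_stub_openai_client():
    """Build a stand-in OpenAI client: no network calls, canned replies."""
    client = MagicMock()
    client.chat.completions.create.return_value = SimpleNamespace(
        choices=[SimpleNamespace(message=SimpleNamespace(content="stubbed reply"))]
    )
    return client


@pytest.fixture
def stub_openai_client():
    """Shared fixture so every unit test talks to the stub, never the real API."""
    return make_stub_openai_client()
```

Keeping the construction logic in a plain helper (and wrapping it in the fixture) also makes the stub reusable outside pytest, e.g. in ad-hoc scripts.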
Tip: Customize your code reviews with copilot-instructions.md. Create the file or learn how to get started.
Code under review (multi-line generator collapsed to one line):

```diff
-            for r in response.guardrail_results.all_results
-            if r.tripwire_triggered
-        ),
+        (r for r in response.guardrail_results.all_results if r.tripwire_triggered),
```
Copilot AI · Oct 8, 2025
[nitpick] The code formatting changes appear to be purely cosmetic line length adjustments that don't improve readability. The original multi-line format was more readable, especially for the complex generator expressions and string concatenations.
Suggested change:

```diff
-        (r for r in response.guardrail_results.all_results if r.tripwire_triggered),
+        (
+            r
+            for r in response.guardrail_results.all_results
+            if r.tripwire_triggered
+        ),
```
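For reference, an expression like the one being discussed is typically fed to `next()` to grab the first triggered result. A self-contained sketch of that pattern (the `Result` class and the result names here are stand-ins, since the real result type isn't shown in this diff):

```python
from dataclasses import dataclass


@dataclass
class Result:
    name: str
    tripwire_triggered: bool


all_results = [
    Result("urls", False),
    Result("secret_keys", True),
    Result("keywords", False),
]

# The generator is lazy: next() stops at the first triggered result,
# and the second argument provides a default when nothing tripped.
first_tripped = next(
    (r for r in all_results if r.tripwire_triggered),
    None,
)
```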
Code under review:

```diff
         )
         passing_fails = sum(1 for c in outcome["passing_cases"] if c["status"] == "FAIL")
         failing_fails = sum(1 for c in outcome["failing_cases"] if c["status"] == "FAIL")
+        errors = sum(1 for c in outcome["passing_cases"] + outcome["failing_cases"] if c["status"] == "ERROR")
```
Copilot AI · Oct 8, 2025
[nitpick] The code formatting changes appear to be purely cosmetic line length adjustments that don't improve readability. The original multi-line format was more readable, especially for the complex generator expressions and string concatenations.
Suggested change:

```diff
-        errors = sum(1 for c in outcome["passing_cases"] + outcome["failing_cases"] if c["status"] == "ERROR")
+        errors = sum(
+            1
+            for c in outcome["passing_cases"] + outcome["failing_cases"]
+            if c["status"] == "ERROR"
+        )
```
Code under review:

```python
        passed_cases = sum(1 for c in outcome["passing_cases"] + outcome["failing_cases"] if c["status"] == "PASS")
        failed_cases = sum(1 for c in outcome["passing_cases"] + outcome["failing_cases"] if c["status"] == "FAIL")
```
Copilot AI · Oct 8, 2025
[nitpick] The code formatting changes appear to be purely cosmetic line length adjustments that don't improve readability. The original multi-line format was more readable, especially for the complex generator expressions and string concatenations.
Suggested change:

```diff
-        passed_cases = sum(1 for c in outcome["passing_cases"] + outcome["failing_cases"] if c["status"] == "PASS")
-        failed_cases = sum(1 for c in outcome["passing_cases"] + outcome["failing_cases"] if c["status"] == "FAIL")
+        passed_cases = sum(
+            1
+            for c in outcome["passing_cases"] + outcome["failing_cases"]
+            if c["status"] == "PASS"
+        )
+        failed_cases = sum(
+            1
+            for c in outcome["passing_cases"] + outcome["failing_cases"]
+            if c["status"] == "FAIL"
+        )
```
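Whichever formatting wins, the counting logic itself is easy to check in isolation. A minimal sketch with made-up case data (the `outcome` shape is inferred from the diff, not taken from the PR):

```python
outcome = {
    "passing_cases": [{"status": "PASS"}, {"status": "FAIL"}],
    "failing_cases": [{"status": "PASS"}, {"status": "ERROR"}],
}

# Combine both buckets once, then count each status.
all_cases = outcome["passing_cases"] + outcome["failing_cases"]
passed_cases = sum(1 for c in all_cases if c["status"] == "PASS")
failed_cases = sum(1 for c in all_cases if c["status"] == "FAIL")
errors = sum(1 for c in all_cases if c["status"] == "ERROR")
```

Binding `all_cases` once would also sidestep the long-line problem that triggered the review comment.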
Code under review:

```diff
-            f"Tests: {summary['passed_tests']} passed, "
-            f"{summary['failed_tests']} failed, "
-            f"{summary['error_tests']} errors",
+            f"Tests: {summary['passed_tests']} passed, " f"{summary['failed_tests']} failed, " f"{summary['error_tests']} errors",
```
Copilot AI · Oct 8, 2025
[nitpick] The code formatting changes appear to be purely cosmetic line length adjustments that don't improve readability. The original multi-line format was more readable, especially for the complex generator expressions and string concatenations.
Suggested change:

```diff
-            f"Tests: {summary['passed_tests']} passed, " f"{summary['failed_tests']} failed, " f"{summary['error_tests']} errors",
+            f"Tests: {summary['passed_tests']} passed, "
+            f"{summary['failed_tests']} failed, "
+            f"{summary['error_tests']} errors",
```
Code under review:

```diff
-            if hasattr(model, "model_fields")
-            else getattr(model, "__fields__", {})
-        )
+        fields = model.model_fields if hasattr(model, "model_fields") else getattr(model, "__fields__", {})
```
Copilot AI · Oct 8, 2025
[nitpick] The reformatted line reduces readability compared to the original multi-line format. The original format with parentheses and line breaks was clearer for this complex conditional expression.
Suggested change:

```diff
-        fields = model.model_fields if hasattr(model, "model_fields") else getattr(model, "__fields__", {})
+        fields = (
+            model.model_fields
+            if hasattr(model, "model_fields")
+            else getattr(model, "__fields__", {})
+        )
```
TY. All Copilot comments are regarding the `eval` folder for now.