Handle sync guardrail calls to avoid awaitable error #21
Conversation
Pull Request Overview
This PR fixes a bug where using synchronous OpenAI clients caused `TypeError: object ChatCompletion can't be used in 'await' expression` errors in LLM-based guardrails. The root cause was that the guardrail LLM was initialized with the same class as the user's client, while guardrails always execute asynchronously. The fix wraps synchronous methods in a background executor so that all calls can be awaited uniformly.
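As a rough illustration of the pattern (not the PR's exact code), a helper along these lines can dispatch to either client type; the name `_invoke_openai_callable` comes from the summary above, while the body is an assumed sketch:

```python
import asyncio
import inspect
from functools import partial
from typing import Any, Callable


async def _invoke_openai_callable(method: Callable[..., Any], **kwargs: Any) -> Any:
    """Sketch: call an OpenAI SDK method that may be sync or async."""
    if inspect.iscoroutinefunction(method):
        # AsyncOpenAI methods can be awaited directly.
        return await method(**kwargs)
    # Sync OpenAI methods are pushed to the default thread-pool executor,
    # so the event loop is never blocked and the caller can always await.
    loop = asyncio.get_running_loop()
    result = await loop.run_in_executor(None, partial(method, **kwargs))
    if inspect.isawaitable(result):
        # Defensive: some wrappers return awaitables from plain functions.
        return await result
    return result
```

Running sync calls in the executor keeps the guardrail pipeline uniformly `await`-able without forcing callers to switch to `AsyncOpenAI`.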
Key changes:
- Added `_invoke_openai_callable` helper to handle both sync and async OpenAI SDK methods
- Updated all LLM-based checks to use the new helper for calling OpenAI APIs
- Added unit tests to verify sync client support
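Assuming the helper is importable from `guardrails.checks.text.llm_base` (the module the file table below points at), a minimal sketch of what such a sync-client test exercises:

```python
import asyncio
from typing import Any

from guardrails.checks.text.llm_base import _invoke_openai_callable


class _SyncChatCompletions:
    """Stub standing in for a sync client's chat.completions endpoint."""

    def create(self, **kwargs: Any) -> Any:
        # A real sync client returns a ChatCompletion object, not an awaitable.
        return {"choices": [{"message": {"content": '{"flagged": false}'}}]}


async def main() -> None:
    completions = _SyncChatCompletions()
    # Routed through the helper, the sync call can be awaited uniformly;
    # previously this pattern raised the reported TypeError. The model
    # kwarg is illustrative only.
    result = await _invoke_openai_callable(completions.create, model="gpt-4.1")
    print(result)


asyncio.run(main())
```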
Reviewed Changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| src/guardrails/checks/text/llm_base.py | Implements `_invoke_openai_callable` and `_request_chat_completion` helpers; updates `run_llm` to support sync/async clients |
| src/guardrails/checks/text/prompt_injection_detection.py | Updates `_call_prompt_injection_detection_llm` to use `_invoke_openai_callable` |
| src/guardrails/checks/text/hallucination_detection.py | Updates `hallucination_detection` to use `_invoke_openai_callable` |
| tests/unit/checks/test_llm_base.py | Adds test for sync client support in `run_llm` |
| tests/unit/checks/test_prompt_injection_detection.py | Adds test for sync response handling in prompt injection detection |
```python
else:
    try:
        from openai import AsyncAzureOpenAI, AzureOpenAI  # type: ignore
    except Exception:  # pragma: no cover - optional dependency
```
Copilot AI (Oct 16, 2025)
The bare `except Exception` is too broad. Consider catching `ImportError` specifically since this is handling an optional dependency import.
```diff
-    except Exception:  # pragma: no cover - optional dependency
+    except ImportError:  # pragma: no cover - optional dependency
```
```python
def create(self, **kwargs: Any) -> Any:
    _ = kwargs
```
Copilot AI (Oct 16, 2025)
The unused variable assignment `_ = kwargs` is unnecessary. If the intention is to indicate kwargs are intentionally unused, this can be omitted as Python allows unused parameters.
This is just in a test; I think it's fine.
```python
class _SyncResponses:
    def parse(self, **kwargs: Any) -> Any:
        _ = kwargs
```
Copilot AI (Oct 16, 2025)
The unused variable assignment `_ = kwargs` is unnecessary. If the intention is to indicate kwargs are intentionally unused, this can be omitted as Python allows unused parameters.
```diff
-        _ = kwargs
```
This is just in a test; I think it's fine.
Thank you!
Fixes reported bug where using a sync client caused a `TypeError: object ChatCompletion can't be used in 'await' expression` error when LLM-based guardrails are executed.

Changes:
- `_invoke_openai_callable` added to properly handle sync and async traffic from all API endpoints we support