Hey everyone :)
This seems like a very new project, and it's my first time raising an issue, so please tell me if any of this is out of place.
Description
With any sync client (GuardrailsOpenAI, GuardrailsAzureOpenai), guardrail execution fails for any LLM-dependent check (e.g. Jailbreak, Off Topic Prompts, Custom Prompt Check) because a synchronous client call is awaited:
Traceback (most recent call last):
  File "C:...\guardrails\checks\text\llm_base.py", line 193, in run_llm
    response = await client.chat.completions.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
    )
    ^
TypeError: object ChatCompletion can't be used in 'await' expression
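For context, the error reproduces in isolation: the sync OpenAI client returns a ChatCompletion object directly, and awaiting that object raises exactly this TypeError. A minimal sketch (plain openai SDK, not Guardrails code):
Python
import asyncio
from openai import OpenAI  # sync client

async def main() -> None:
    client = OpenAI()
    completion = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[{"role": "user", "content": "hi"}],
    )
    # completion is a plain ChatCompletion, not a coroutine, so this
    # await fails with "object ChatCompletion can't be used in
    # 'await' expression".
    await completion

asyncio.run(main())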
Repro steps
guardrails_config.json
{
  "version": 1,
  "pre_flight": {
    "version": 1,
    "guardrails": []
  },
  "input": {
    "version": 1,
    "guardrails": [
      {
        "name": "Jailbreak",
        "config": {
          "model": "gpt-4.1-mini",
          "confidence_threshold": 0.7
        }
      },
      {
        "name": "Off Topic Prompts",
        "config": {
          "model": "gpt-4.1-mini",
          "confidence_threshold": 0.7,
          "system_prompt_details": "You are a helpful assistant. Keep responses focused on the user's questions and avoid going off-topic."
        }
      },
      {
        "name": "Custom Prompt Check",
        "config": {
          "model": "gpt-4.1-mini",
          "confidence_threshold": 0.7,
          "system_prompt_details": "You are a customer support assistant. Raise the guardrail if questions aren’t focused on customer inquiries, product support, and service-related questions."
        }
      }
    ]
  },
  "output": {
    "version": 1,
    "guardrails": []
  }
}
Python
from pathlib import Path
from guardrails import GuardrailsOpenAI, GuardrailTripwireTriggered
from dotenv import load_dotenv
load_dotenv()
client = GuardrailsOpenAI(config=Path("guardrails_config.json"))
try:
    chat = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": "Hello world"}],
    )
    print(chat.llm_response.choices[0].message.content)
    print(chat.guardrail_results.all_results)
except GuardrailTripwireTriggered as e:
    print(f"Guardrail triggered: {e}")
Expected behavior
Execution does not fail: the guardrail checks run, and the completion is returned.
Debug information
- Guardrails v0.1.0 (obviously)
- Python 3.13.7
Notes
The issue lies in llm_base.py->run_llm(), which is used by the LLM-guardrail factory create_llm_check_fn(). I guess self.context.guardrail_llm was supposed to always be async so that all guardrails can run concurrently (runtime.py->run_guardrails()->run_one()), but as of now it is initialised to the same class as the actual client.
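Based on the traceback, run_llm() presumably does something like this (paraphrased, not the actual source; the real signature may differ):
Python
async def run_llm(ctx, messages, model):
    # ctx.guardrail_llm is initialised to the same class as the user's
    # client, so with GuardrailsOpenAI this is a *sync* OpenAI client:
    # create() returns a ChatCompletion, and the await blows up.
    response = await ctx.guardrail_llm.chat.completions.create(
        model=model,
        messages=messages,
    )
    return response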
Two possible solutions may be:
- Change run_llm() to handle sync and async clients differently (see the sketch after this list).
- Always initialise guardrail_llm as an async version of whatever actual client is created. I'm not sure whether that would have adverse effects for users who specifically need sync clients.
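For the first option, a sketch of what the branch in run_llm() could look like (the isinstance check and asyncio.to_thread() offload are my suggestion, not existing code; AsyncAzureOpenAI subclasses AsyncOpenAI, so one check covers both):
Python
import asyncio
from openai import AsyncOpenAI

async def run_llm(ctx, messages, model):
    client = ctx.guardrail_llm
    if isinstance(client, AsyncOpenAI):
        # Async client: create() returns a coroutine, await it as today.
        return await client.chat.completions.create(
            model=model, messages=messages
        )
    # Sync client: run the blocking call in a worker thread so the other
    # guardrails can still execute concurrently on the event loop.
    return await asyncio.to_thread(
        client.chat.completions.create, model=model, messages=messages
    )
For the second option, the mapping would roughly be OpenAI -> AsyncOpenAI and AzureOpenAI -> AsyncAzureOpenAI at construction time, which keeps run_llm() unchanged but creates a second client behind the user's back.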
Please let me know if there is any more/better info I can provide! :)