import json
import os

# Imports as I understand them from the openai-guardrails package docs;
# guardrails_config is my pipeline config dict (definition elided here).
from guardrails import GuardrailsAsyncOpenAI, JsonString

client = GuardrailsAsyncOpenAI(
    config=JsonString(json.dumps(guardrails_config))
)
response = await client.chat.completions.create(
    model=os.getenv("OPENAI_MODEL_NAME", "gpt-4.1-mini"),
    messages=[{"role": "user", "content": req.input}],
    suppress_tripwire=True,  # don't raise on tripwire; inspect results instead
)
I've inspected the response object, but it doesn't include the LLM token usage (and therefore the cost) for each guardrail check, even though the guardrails ran and the tripwire triggered successfully. It only shows the tokens used for the main LLM response.
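For reference, this is roughly how I'm checking (a minimal sketch; `llm_response`, `guardrail_results`, `all_results`, and the per-check `info` dict are the attribute names as I understand them from the docs, so they may not be exact):

# Sketch of how I'm inspecting the wrapped response:
print(response.llm_response.usage)  # only the main completion's token usage

for result in response.guardrail_results.all_results:  # per-guardrail outcomes
    # tripwire flag and metadata are present, but I see no token/cost fields
    print(result.tripwire_triggered, result.info)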
How can I get that value? Or has it not been implemented yet in this version?
Thanks.