| Developed by | Jonathan Bennion |
|---|---|
| Date of development | Mar 27, 2024 |
| Validator type | Format |
| License | Apache 2 |
| Input/Output | Output |
Checks model output for logical fallacies, which can arise from, among other causes, RAG over similar documents or conflicts with optimized datasets.

Intended to be used by developers to ensure that model output is logically sound. One caveat is that this check could interfere with use cases where sound logic is not needed.
Dependencies:
- guardrails-ai>=0.4.0

Dev Dependencies:
- pytest
- pyright
- ruff

Foundation model access keys:
- OPENAI_API_KEY: the validator is set up to use this key and calls an OpenAI model by name.
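Because the validator calls an OpenAI model, the OPENAI_API_KEY environment variable must be available before the guard runs. A minimal sketch is below; the key value is a placeholder, and in practice you would export the variable in your shell or load it from a secrets manager:

```python
import os

# Make the OpenAI key available before constructing the Guard.
# The value here is a placeholder; substitute your own key.
os.environ.setdefault("OPENAI_API_KEY", "sk-your-key-here")
```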
```bash
$ guardrails hub install hub://guardrails/logic_check
```

In this example, we apply the validator to a string output generated by an LLM.
```python
# Import Guard and Validator
from guardrails.hub import LogicCheck
from guardrails import Guard

# Setup Guard with the LogicCheck validator
guard = Guard().use(
    LogicCheck()
)

guard.validate("Science can prove how the world works.")  # Validator passes
guard.validate("The sky always contains clouds.")  # Validator fails
```