guardrails-ai/logic_check

Overview

Developed by: Jonathan Bennion
Date of development: Mar 27, 2024
Validator type: Format
License: Apache 2
Input/Output: Output

Description

Checks model output for logical fallacies, which can arise from, among other causes, RAG over similar documents whose claims conflict, or conflicts with the datasets a model was optimized on.

Intended Use

Intended to be used by developers to ensure that model output is logically sound. One caveat: the validator may interfere with use cases where sound logic is not required.

Requirements

  • Dependencies:

    • guardrails-ai>=0.4.0
  • Dev Dependencies:

    • pytest
    • pyright
    • ruff
  • Foundation model access keys:

    • This validator is intended to be set up with the OPENAI_API_KEY environment variable and uses an OpenAI model name (see the sketch after this list).
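
For example, the key can be supplied from the environment before the guard runs. A minimal sketch, assuming a placeholder key value ("sk-..." is not a real key; exporting OPENAI_API_KEY in your shell works equally well):

import os

# The validator's LLM calls read the API key from the environment.
# "sk-..." below is a placeholder, not a real key.
os.environ.setdefault("OPENAI_API_KEY", "sk-...")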

Installation

$ guardrails hub install hub://guardrails/logic_check

Usage Examples

Validating string output via Python

In this example, we apply the validator to a string output generated by an LLM.

# Import Guard and Validator
from guardrails.hub import LogicCheck
from guardrails import Guard

# Set up Guard with the validator attached
guard = Guard().use(
    LogicCheck()
)

guard.validate("Science can prove how the world works.")  # Validator passes
guard.validate("The sky always contains clouds.")  # Validator fails
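
By default, guard.validate returns a ValidationOutcome object rather than raising on failure. A minimal sketch of inspecting the result programmatically, assuming the guard defined above (Guardrails validators generally also accept on_fail="exception" at construction to raise instead):

# Check the outcome instead of relying on an exception.
outcome = guard.validate("The sky always contains clouds.")
print(outcome.validation_passed)  # False when the validator flags a fallacy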
