A Guardrails AI validator that checks whether an LLM-generated response contains gibberish


Overview

  • Developed by: Guardrails AI
  • Date of development: Feb 15, 2024
  • Validator type: Format
  • Blog: -
  • License: Apache 2
  • Input/Output: Output

Description

This validator checks the "cleanliness" of text generated by a language model. It uses a pre-trained classification model to determine whether the text is coherent rather than gibberish, and can be used to filter out responses that are incoherent or do not make sense.
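
For intuition, the core check can be sketched with a Hugging Face text-classification pipeline. This is a minimal illustration, not the validator's actual implementation; the model name (madhurjindal/autonlp-Gibberish-Detector) and its label scheme are assumptions.

# Minimal sketch of a gibberish check with a pre-trained classifier.
# Assumption: the madhurjindal/autonlp-Gibberish-Detector model and its
# "clean" label; the validator's real model and label handling may differ.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="madhurjindal/autonlp-Gibberish-Detector",
)

def is_clean(sentence: str, threshold: float = 0.5) -> bool:
    # The pipeline returns the top label and its confidence score.
    result = classifier(sentence)[0]
    return result["label"] == "clean" and result["score"] >= threshold

print(is_clean("Azure is a cloud computing service created by Microsoft."))  # likely True
print(is_clean("Fox fox fox."))                                              # likely False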

Requirements

  • Dependencies: nltk, transformers, torch

Installation

guardrails hub install hub://guardrails/gibberish_text

Usage Examples

Validating string output via Python

In this example, we apply the GibberishText validator to LLM-generated text.

# Import Guard and Validator
from guardrails.hub import GibberishText
from guardrails import Guard

# Use the Guard with the validator
guard = Guard().use(
    GibberishText, threshold=0.5, validation_method="sentence", on_fail="exception"
)

# Test passing response
guard.validate(
    "Azure is a cloud computing service created by Microsoft. It's a significant competitor to AWS."
)

try:
    # Test failing response
    guard.validate(
        "Floppyland love great coffee okay. Fox fox fox. Move to New York City."
    )
except Exception as e:
    print(e)

Output:

Validation failed for field with errors: The following sentences in your response were found to be gibberish:

- Floppyland love great coffee okay.
- Fox fox fox.

Note: Only the first two sentences of the failing response are flagged as gibberish; the third sentence is coherent.
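
To validate the response as a single block instead of sentence by sentence, set validation_method to "full". This variant is a sketch based on the parameters documented in the API reference below; the exact failure message may differ from the sentence-level output shown above.

# Validate the whole response in one pass rather than per sentence
from guardrails.hub import GibberishText
from guardrails import Guard

guard = Guard().use(
    GibberishText, threshold=0.5, validation_method="full", on_fail="exception"
)

# A coherent response passes; a gibberish response raises an exception.
guard.validate(
    "Azure is a cloud computing service created by Microsoft. It's a significant competitor to AWS."
)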

API Reference

__init__(self, threshold=0.5, validation_method='sentence', on_fail="noop")

    Initializes a new instance of the Validator class.

    Parameters:

    • threshold (float): The confidence threshold (from model inference) above which text is considered "clean". Defaults to 0.5.
    • validation_method (str): Whether to validate at the sentence level or over the full text. Must be one of sentence or full. Defaults to sentence.
    • on_fail (str, Callable): The policy to enact when a validator fails. If a str, must be one of reask, fix, filter, refrain, noop, exception or fix_reask. Otherwise, it must be a function that is called when the validator fails (see the sketch after this list).
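
As a sketch of the callable form of on_fail, the handler below replaces gibberish output with a fixed fallback string. The handler signature (value, fail_result) is an assumption based on common Guardrails usage and may vary by version.

# Sketch of a custom on_fail handler.
# Assumption: the handler receives the failing value and a FailResult-like
# object; check the Guardrails docs for the exact contract in your version.
from guardrails.hub import GibberishText
from guardrails import Guard

def redact_gibberish(value, fail_result):
    # Replace gibberish output with a fixed fallback string.
    return "[response removed: gibberish detected]"

guard = Guard().use(
    GibberishText,
    threshold=0.5,
    validation_method="sentence",
    on_fail=redact_gibberish,
)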

__call__(self, value, metadata={}) -> ValidationResult

    Validates the given value using the rules defined in this validator, relying on the metadata provided to customize the validation process. This method is automatically invoked by guard.parse(...), ensuring the validation logic is applied to the input data.

    Note:

    1. This method should not be called directly by the user. Instead, invoke guard.parse(...), which calls this method internally for each associated Validator.
    2. When invoking guard.parse(...), pass a metadata dictionary containing the keys and values required by this validator. If the guard is associated with multiple validators, combine all necessary metadata into a single dictionary (see the sketch after the parameter list below).

    Parameters:

    • value (Any): The input value to validate.
    • metadata (dict): A dictionary containing metadata required for validation. No additional metadata keys are needed for this validator.
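
For example, because this validator needs no extra metadata keys, a guard.parse call can pass an empty dictionary. This is a sketch; llm_output here is a hard-coded string standing in for a real model response.

# Run the validator through guard.parse rather than calling __call__ directly.
from guardrails.hub import GibberishText
from guardrails import Guard

guard = Guard().use(GibberishText, on_fail="noop")

# metadata is empty because GibberishText requires no additional keys.
outcome = guard.parse(
    llm_output="Azure is a cloud computing service created by Microsoft.",
    metadata={},
)
print(outcome.validation_passed)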
