Plexiglass

Quickstart | Installation | Documentation | Code of Conduct


Plexiglass is a toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs).

It is a simple command-line interface (CLI) tool that lets users quickly test LLMs against adversarial attacks such as prompt injection, jailbreaking, and more.

Plexiglass also supports security, bias, and toxicity benchmarking of multiple LLMs by scraping the latest adversarial prompts from sources such as jailbreakchat.com and wiki_toxic. See Modes for details.

Quickstart

Please follow this quickstart guide in the documentation.

Installation

The first experimental release is version 0.0.1.

To install the package from PyPI:

pip install --upgrade plexiglass

Modes

Plexiglass has two modes: llm-chat and llm-scan.

llm-chat allows you to converse with an LLM and measure predefined metrics, such as toxicity, from its responses. It currently supports the following metrics:

  • toxicity
  • pii_detection

llm-scan runs benchmarks using open-source datasets to identify and assess various vulnerabilities in the LLM.
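
For illustration, the two modes might be invoked from the command line roughly as follows; the command name and flags shown here are assumptions rather than the confirmed interface, so consult the quickstart guide for the exact syntax.

# hypothetical invocations -- check the documentation for the real flags
plexiglass llm-chat --metrics toxicity,pii_detection
plexiglass llm-scan --dataset wiki_toxic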

Feature Request

To request new features, please submit an issue.

Development Roadmap

  • implement adversarial prompt templates in llm-chat mode
  • security, bias, and toxicity benchmarking with llm-scan mode
  • generate HTML reports in llm-scan and llm-chat modes
  • standalone Python module
  • production-ready API

Join us in #plexiglass on Discord.

Contributors

Made with contrib.rocks.

Code of Conduct

Read our Code of Conduct.