
Promptr: An Automated LLM Testing/Comparison Framework

What is Promptr?

Promptr sends prompts from Hugging Face datasets to your specified API endpoints. You can configure the prompt split (benign/malicious) and add as many endpoints/models as you like. The results are stored in a CSV and parsed into graphs that highlight weak points in your implementation.
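
The snippet below is a minimal sketch of that core loop, not Promptr's actual code: the dataset name, endpoint URLs, payload shape, and column names are all placeholder assumptions you would replace with your own configuration.

# Sketch: pull prompts from a Hugging Face dataset, send each one to
# every configured endpoint, and log the results to a CSV.
# Dataset name, URLs, and payload shape below are illustrative only.
import csv
import requests
from datasets import load_dataset

ENDPOINTS = {
    "model-a": "https://example.com/v1/chat",  # placeholder URL
    "model-b": "https://example.org/v1/chat",  # placeholder URL
}

dataset = load_dataset("some/prompt-dataset", split="train")  # hypothetical dataset

with open("results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["endpoint", "prompt", "label", "status", "response"])
    for row in dataset:
        prompt = row["text"]  # assumes the dataset exposes a "text" column
        label = row.get("label", "unknown")  # benign/malicious split, if provided
        for name, url in ENDPOINTS.items():
            resp = requests.post(url, json={"prompt": prompt}, timeout=30)
            writer.writerow([name, prompt, label, resp.status_code, resp.text[:200]])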

Why?

You can compare block rates across Palo Alto Prisma AIRS, AWS Bedrock, or any other AI protection system you use. Configure the blocking-detection behavior (WIP), and Promptr can begin tracking effectiveness.
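
Since blocking detection is still WIP, here is one hypothetical shape it could take: treat a response as blocked if the endpoint returns a configured status code or the body matches a configurable refusal pattern. The status codes and patterns below are illustrative defaults, not Promptr's real configuration.

# Hypothetical block-detection heuristic (the real mechanism is WIP).
import re

BLOCK_STATUS_CODES = {403, 451}                      # illustrative defaults
BLOCK_PATTERNS = [r"blocked", r"policy violation"]   # illustrative defaults

def is_blocked(status_code: int, body: str) -> bool:
    if status_code in BLOCK_STATUS_CODES:
        return True
    return any(re.search(p, body, re.IGNORECASE) for p in BLOCK_PATTERNS)

A block rate per endpoint then falls out directly: blocked responses divided by total prompts sent, broken down by the benign/malicious split.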

Use

Install the requirements:

pip install -r requirements.txt

Run the app:

streamlit run app.py

Then simply configure your endpoints in the launched GUI and begin promptring.

Features / Futures

  • Customizable Endpoints
  • Basic Dataset Configurations
  • Customizable Datasets
  • Radar graph comparison
  • Customizable Graphs/Comparisons
  • Customizable Blocking Detection
