Promptr sends Hugging Face prompts to specified API endpoints. You can configure the prompt split (benign/malicious) and add as many endpoints/models as you like. Results are stored in a CSV and parsed into graphs that highlight weak points in your implementation.
You can compare block rates between Palo Alto Prisma AIRS, AWS Bedrock, or any other AI protection system you use. Simply configure the blocking behavior (WIP), and Promptr will begin tracking effectiveness.
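As a rough illustration of the block-rate comparison described above, the sketch below computes per-endpoint block rates on malicious prompts from a results CSV. The column names (`endpoint`, `prompt_type`, `blocked`) and the sample data are assumptions for illustration only; Promptr's actual CSV schema may differ.

```python
import csv
import io
from collections import defaultdict

# Hypothetical results CSV -- column names and rows are illustrative,
# not Promptr's real output format.
SAMPLE = """endpoint,prompt_type,blocked
prisma-airs,malicious,true
prisma-airs,malicious,false
bedrock,malicious,true
bedrock,benign,false
"""

def block_rates(rows):
    """Compute the fraction of malicious prompts each endpoint blocked."""
    hits = defaultdict(int)    # blocked malicious prompts per endpoint
    totals = defaultdict(int)  # total malicious prompts per endpoint
    for row in rows:
        if row["prompt_type"] == "malicious":
            totals[row["endpoint"]] += 1
            if row["blocked"] == "true":
                hits[row["endpoint"]] += 1
    return {ep: hits[ep] / totals[ep] for ep in totals}

rates = block_rates(csv.DictReader(io.StringIO(SAMPLE)))
# e.g. rates["prisma-airs"] is 0.5 here: one of two malicious prompts blocked
```

A dictionary keyed by endpoint keeps the aggregation simple; benign prompts are excluded so the rate reflects only how often attacks were caught.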
Install requirements:

```
pip install -r requirments.txt
```

Run the app:

```
streamlit run app.py
```

Then simply configure your endpoints in the launched GUI and begin promptring.
- Customizable Endpoints
- Basic Dataset Configurations
- Customizable Datasets
- Radar graph comparison
- Customizable Graphs/Comparisons
- Customizable Blocking Detection
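The radar-graph comparison listed above could be sketched roughly as follows. The category names and scores are invented for illustration, and Promptr's real graphs may be built differently; this only shows the polar-coordinate shape such a chart needs.

```python
import math

# Hypothetical per-category block rates for two protection systems
# (names and numbers are illustrative, not real benchmark results).
scores = {
    "System A": [0.9, 0.7, 0.8, 0.6],
    "System B": [0.6, 0.8, 0.5, 0.9],
}
categories = ["jailbreak", "prompt injection", "toxicity", "PII"]

def radar_points(values):
    """Spread values evenly around a circle and repeat the first point,
    closing the polygon the way a polar (radar) plot expects."""
    n = len(values)
    angles = [2 * math.pi * i / n for i in range(n)]
    return angles + angles[:1], values + values[:1]

angles, vals = radar_points(scores["System A"])
# With matplotlib: ax = plt.subplot(polar=True); ax.plot(angles, vals)
```

Closing the polygon (repeating the first angle/value pair) is what makes the plotted outline wrap back to its starting category instead of leaving a gap.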