Learn LLM/AI Security through a series of vulnerable LLM CTF challenges. No sign ups, all fees, everything on the website.
-
Updated
Aug 19, 2024
Learn LLM/AI Security through a series of vulnerable LLM CTF challenges. No sign ups, all fees, everything on the website.
A flexible and portable solution that uses a single robust prompt and customized hyperparameters to classify user messages as either malicious or safe, helping to prevent jailbreaking and manipulation of chatbots and other LLM-based solutions.
Prompt Engineering Tool for AI Models with cli prompt or api usage
Add a description, image, and links to the prompt-hacking topic page so that developers can more easily learn about it.
To associate your repository with the prompt-hacking topic, visit your repo's landing page and select "manage topics."