An intentionally vulnerable AI chatbot to learn and practice AI Security.
-
Updated
Apr 15, 2024 - HTML
An intentionally vulnerable AI chatbot to learn and practice AI Security.
GPT 2 model trained on fake PII to study PII leakage from large language models
A secure, AI-enhanced file scanning tool built on Flask, strengthened with ClamAV and PDF analysis, designed to vigilantly detect digital threats and potential vulnerabilities.
Website Prompt Injection is a concept that allows for the injection of prompts into an AI system via a website's. This technique exploits the interaction between users, websites, and AI systems to execute specific prompts that influence AI behavior.
A curated list of useful resources that cover Offensive AI.
Add a description, image, and links to the ai-security topic page so that developers can more easily learn about it.
To associate your repository with the ai-security topic, visit your repo's landing page and select "manage topics."