CLI tool that uses the Lakera API to perform security checks in LLM inputs
-
Updated
Mar 13, 2024 - Python
CLI tool that uses the Lakera API to perform security checks in LLM inputs
Prompt Engineering Tool for AI Models with cli prompt or api usage
Neural networks, but malefic! 😈
Uncertainty guided Federated Learning
Official code for paper: Z. Zhang, X. Wang, J. Huang and S. Zhang, "Analysis and Utilization of Hidden Information in Model Inversion Attacks," in IEEE Transactions on Information Forensics and Security, doi: 10.1109/TIFS.2023.3295942
Official Implementation of IEEE TIFS paper Odyssey: Creation, Analysis and Detection of Trojan Models
Discover and inventory the SaaS applications used across your organization by intelligently analyzing incoming Gmail emails, providing valuable insights into your SaaS landscape.
Manage and use pre-trained deep neural networks with a common interface for build, compile, fit, evaluate, kfold, cross validate, and predict lifecycle phases using Keras and Tensorflow
AiShields is an open-source Artificial Intelligence Data Input and Output Sanitizer
GeminiHacker is a Python script designed to harness the power of a generative AI model for security research, bug bounty hunting, and vulnerability scanning. This README.md file provides detailed instructions on how to install, configure, and use the script effectively.
dga domain detected by lstm model
The implementation of our paper 'Visual Privacy Protection via Mapping Distortion', accepted by the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2021.
AntiNex python client for training and using pre-trained deep neural networks with JWT authentication
Neural networks, but malefic! 😈
Datasets for training deep neural networks to defend software applications
Evaluation & testing framework for computer vision models
The official implementation of USENIX Security'23 paper "Meta-Sift" -- Ten minutes or less to find a 1000-size or larger clean subset on poisoned dataset.
Image Prompt Injection is a Python script that demonstrates how to embed a secret prompt within an image using steganography techniques. This hidden prompt can be later extracted by an AI system for analysis, enabling covert communication with AI models through images.
The Prompt Injection Testing Tool is a Python script designed to assess the security of your AI system's prompt handling against a predefined list of user prompts commonly used for injection attacks. This tool utilizes the OpenAI GPT-3.5 model to generate responses to system-user prompt pairs and outputs the results to a CSV file for analysis.
Add a description, image, and links to the ai-security topic page so that developers can more easily learn about it.
To associate your repository with the ai-security topic, visit your repo's landing page and select "manage topics."