LLM Security Project with Llama Guard
-
Updated
Feb 18, 2024 - Python
LLM Security Project with Llama Guard
This repository is the official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms." ASSET achieves state-of-the-art reliability in detecting poisoned samples in end-to-end supervised learning/ self-supervised learning/ transfer learning.
Prompt Engineering Tool for AI Models with cli prompt or api usage
This research exploring [Research Idea in a few words]. This work [Specific benefit of research] holds promise for [Positive impact]. This research been led by Dr.Samer Khamaiseh and wth ongoing efforts of Deirdre Jost and Steven Chiacchira
An interactive CLI application for interacting with authenticated Jupyter instances.
AiShields is an open-source Artificial Intelligence Data Input and Output Sanitizer
Image Prompt Injection is a Python script that demonstrates how to embed a secret prompt within an image using steganography techniques. This hidden prompt can be later extracted by an AI system for analysis, enabling covert communication with AI models through images.
ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications
Add a description, image, and links to the aisecurity topic page so that developers can more easily learn about it.
To associate your repository with the aisecurity topic, visit your repo's landing page and select "manage topics."