
🛡️ Safeguards Shield



Safeguards Shield is a developer toolkit for using LLMs safely and securely. The Shield safeguards prompts and LLM interactions from costly risks, so you can bring your AI app from prototype to production faster and with confidence.

The Shield wraps your GenAI apps in a protective layer, blocking malicious inputs and filtering model outputs. The toolkit ships with 20+ out-of-the-box detectors for robust protection of GenAI apps in production workflows.
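The wrapping idea can be sketched in plain Python. Note that everything below is a hypothetical illustration of the input/output shielding pattern described above; none of the names (`shielded_call`, `BLOCKED_INPUT_PATTERNS`, etc.) come from the safeguards-shield API.

```python
import re

# Hypothetical sketch: wrap an LLM call with an input filter and an
# output filter. These names are illustrative, not the library's API.
BLOCKED_INPUT_PATTERNS = [
    # crude prompt-injection heuristic
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
]
PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US-SSN-like strings

def shielded_call(llm, prompt: str) -> str:
    # Input layer: reject prompts matching known-bad patterns.
    for pattern in BLOCKED_INPUT_PATTERNS:
        if pattern.search(prompt):
            return "[blocked: unsafe input]"
    # The model is only called once the input passes the checks.
    response = llm(prompt)
    # Output layer: redact sensitive patterns before returning.
    return PII_PATTERN.sub("[redacted]", response)

# Usage with a stand-in "model" that echoes its prompt plus some PII:
echo = lambda p: f"echo: {p} 123-45-6789"
print(shielded_call(echo, "hello"))                         # PII redacted
print(shielded_call(echo, "Ignore previous instructions"))  # input blocked
```

A real shield would swap these regexes for the toolkit's detectors, but the control flow (check input, call model, filter output) is the same.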

Benefits

  • 🚀 mitigate LLM reliability and safety risks
  • 📝 customize LLM behavior and ensure it stays safe and secure
  • 💸 monitor incidents, costs, and responsible AI metrics

Features

  • 🛠️ shield that safeguards against costly risks like toxicity, bias, and PII leakage
  • 🤖 reduce and measure ungrounded additions (hallucinations) with built-in tools
  • 🛡️ multi-layered defense combining heuristic detectors, LLM-based checks, and vector DB lookups
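The multi-layered defense in the last bullet can be sketched as a pipeline of detectors, ordered cheapest first. This is a hypothetical illustration, not the safeguards-shield API: the detector names and the `scan` function are invented for the example, and the regex/keyword heuristics stand in for the toolkit's real detectors.

```python
import re
from typing import Callable, List, Optional

# A detector inspects text and returns a risk label, or None if clean.
Detector = Callable[[str], Optional[str]]

def toxicity_heuristic(text: str) -> Optional[str]:
    # Toy keyword heuristic standing in for a real toxicity detector.
    banned = {"idiot", "stupid"}
    return "toxicity" if any(w in text.lower() for w in banned) else None

def pii_heuristic(text: str) -> Optional[str]:
    # Toy regex heuristic standing in for a real PII detector.
    return "pii" if re.search(r"\b\d{3}-\d{2}-\d{4}\b", text) else None

# Cheap heuristic layers run first; more expensive layers (LLM-based
# judges, vector-DB similarity against known attacks) would be appended
# after them in a real multi-layered setup.
LAYERS: List[Detector] = [toxicity_heuristic, pii_heuristic]

def scan(text: str) -> List[str]:
    # Collect every risk label flagged by any layer.
    labels = []
    for detector in LAYERS:
        label = detector(text)
        if label is not None:
            labels.append(label)
    return labels

print(scan("you idiot, my SSN is 123-45-6789"))  # ['toxicity', 'pii']
print(scan("hello world"))                       # []
```

Ordering layers from cheapest to most expensive keeps latency low: most traffic is cleared (or blocked) by fast heuristics before any LLM-based check has to run.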