Shrike Guard is a Python SDK for the Shrike Security platform. It wraps OpenAI, Anthropic (Claude), and Google Gemini clients to automatically scan all prompts for security threats before they reach the LLM — using the same 10-layer detection pipeline that powers the MCP server, REST API, and LLM Proxy Gateway.
- Drop-in replacement for OpenAI, Anthropic, and Gemini clients
- Automatic prompt scanning for:
  - Prompt injection attacks
  - PII/sensitive data leakage
  - Jailbreak attempts
  - SQL injection
  - Path traversal
  - Malicious instructions
- Fail-safe modes: Choose between fail-open (default) or fail-closed behavior
- Async support: Works with both sync and async clients
- Zero code changes: Just replace your import
Shrike's backend runs a multi-stage detection pipeline with security rules across 7 compliance frameworks:
| Framework | Coverage |
|---|---|
| GDPR | EU personal data — names, addresses, national IDs |
| HIPAA | Protected health information (PHI) |
| ISO 27001 | Information security — passwords, tokens, certificates |
| SOC 2 | Secrets, credentials, API keys, cloud tokens |
| NIST | AI risk management (IR 8596), cybersecurity framework (CSF 2.0) |
| PCI-DSS | Cardholder data — PAN, CVV, expiry, track data |
| WebMCP | MCP tool description injection, data exfiltration |
Plus built-in detection for prompt injection, jailbreaks, social engineering, and dangerous requests.
Detection depth depends on your tier. All tiers get the same SDK wrappers — tiers control which backend layers run.
| | Anonymous | Community | Pro | Enterprise |
|---|---|---|---|---|
| Detection Layers | L1-L5 | L1-L7 | L1-L8 | L1-L9 |
| API Key | Not needed | Free signup | Paid | Paid |
| Rate Limit | — | 10/min | 100/min | 1,000/min |
| Scans/month | — | 1,000 | 50,000 | 1,000,000 |
Anonymous (no API key): Pattern-based detection (L1-L5). Community (free): Adds LLM-powered semantic analysis. Register at shrikesecurity.com/signup — instant, no credit card.
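To get an intuition for what the pattern-based layers do, here is a toy, self-contained sketch. These regexes are illustrative only and are not Shrike's actual detection rules:

```python
import re

# Toy examples of pattern-layer checks -- illustrative only, NOT Shrike's rules.
TOY_PATTERNS = {
    "prompt_injection": re.compile(r"ignore (all )?previous instructions", re.I),
    "path_traversal": re.compile(r"\.\./"),
    "secret_leak": re.compile(r"\bsk-[A-Za-z0-9]{10,}\b"),
}

def toy_scan(text):
    """Return the first matching threat category, or None if nothing matches."""
    for threat, pattern in TOY_PATTERNS.items():
        if pattern.search(text):
            return threat
    return None
```

Real pattern layers combine many more signals; the LLM-powered layers added at Community tier and above catch paraphrased attacks that no fixed pattern would match.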
```bash
pip install shrike-guard              # OpenAI (included by default)
pip install shrike-guard[anthropic]   # + Anthropic Claude
pip install shrike-guard[gemini]      # + Google Gemini
pip install shrike-guard[all]         # All providers
```

```python
from shrike_guard import ShrikeOpenAI

# Replace 'from openai import OpenAI' with this
client = ShrikeOpenAI(
    api_key="sk-...",            # Your OpenAI API key
    shrike_api_key="shrike-...", # Your Shrike API key
)

# Use exactly like the regular OpenAI client
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello, how are you?"}]
)
print(response.choices[0].message.content)
```

```python
from shrike_guard import ShrikeAnthropic

client = ShrikeAnthropic(
    api_key="sk-ant-...",
    shrike_api_key="shrike-...",
)

response = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.content[0].text)
```

```python
from shrike_guard import ShrikeGemini

client = ShrikeGemini(
    api_key="AIza...",
    shrike_api_key="shrike-...",
)

model = client.GenerativeModel("gemini-pro")
response = model.generate_content("Hello!")
print(response.text)
```

```python
import asyncio
from shrike_guard import ShrikeAsyncOpenAI

async def main():
    client = ShrikeAsyncOpenAI(
        api_key="sk-...",
        shrike_api_key="shrike-...",
    )
    response = await client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(response.choices[0].message.content)
    await client.close()

asyncio.run(main())
```

Choose how the SDK behaves when the security scan fails (timeout, network error, etc.):
```python
# Fail-open (default): allow requests if the scan fails
# Best for: most applications where availability is important
client = ShrikeOpenAI(
    api_key="sk-...",
    shrike_api_key="shrike-...",
    fail_mode="open",  # This is the default
)

# Fail-closed: block requests if the scan fails
# Best for: security-critical applications
client = ShrikeOpenAI(
    api_key="sk-...",
    shrike_api_key="shrike-...",
    fail_mode="closed",
)
```

```python
client = ShrikeOpenAI(
    api_key="sk-...",
    shrike_api_key="shrike-...",
    scan_timeout=2.0,  # Timeout in seconds (default: 10.0)
)
```

For self-hosted Shrike deployments:
```python
client = ShrikeOpenAI(
    api_key="sk-...",
    shrike_api_key="shrike-...",
    shrike_endpoint="https://your-shrike-instance.com",
)
```

```python
from shrike_guard import ScanClient

with ScanClient(api_key="shrike-...") as scanner:
    # Scan SQL queries for injection attacks
    sql_result = scanner.scan_sql("SELECT * FROM users WHERE id = 1")
    if not sql_result["safe"]:
        print(f"SQL threat: {sql_result['reason']}")

    # Scan file paths for path traversal
    file_result = scanner.scan_file("/app/data/output.csv")

    # Scan file content for secrets/PII
    content_result = scanner.scan_file("/tmp/config.py", "api_key = 'sk-...'")
```

```python
from shrike_guard import ShrikeOpenAI, ShrikeBlockedError, ShrikeScanError

client = ShrikeOpenAI(
    api_key="sk-...",
    shrike_api_key="shrike-...",
    fail_mode="closed",  # To see scan errors
)

try:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Some prompt..."}]
    )
except ShrikeBlockedError as e:
    # Prompt was blocked due to a security threat
    print(f"Blocked: {e.message}")
    print(f"Threat type: {e.threat_type}")
    print(f"Confidence: {e.confidence}")
except ShrikeScanError as e:
    # Scan failed (only raised with fail_mode="closed")
    print(f"Scan error: {e.message}")
```

For more control, use the scan client directly:
```python
from shrike_guard import ScanClient

with ScanClient(api_key="shrike-...") as scanner:
    result = scanner.scan("Check this prompt for threats")
    if result["safe"]:
        print("Prompt is safe!")
    else:
        print(f"Threat detected: {result['reason']}")
```

- Python: 3.8+
- LLM SDKs:
  - OpenAI SDK `>=1.0.0`
  - Anthropic SDK `>=0.18.0` (optional: `pip install shrike-guard[anthropic]`)
  - Google Generative AI `>=0.3.0` (optional: `pip install shrike-guard[gemini]`)
- Works with:
  - OpenAI API
  - Azure OpenAI
  - OpenAI-compatible APIs (Ollama, vLLM, etc.)
You can configure the SDK using environment variables:

```bash
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export SHRIKE_API_KEY="shrike-..."
export SHRIKE_ENDPOINT="https://your-shrike-instance.com"
```

| Scanned | Not Scanned |
|---|---|
| Input prompts (user messages) | Streaming output from LLM |
| System prompts | Image/audio content |
| Multi-modal text content | Non-chat API calls |
| SQL queries | |
| File paths and content | |
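The table above can be made concrete with a small illustrative helper (not part of the SDK) that pulls out the text a pre-flight scanner would examine from an OpenAI-style message list, skipping image and audio parts:

```python
# Illustrative only: collect the text parts a pre-flight scanner looks at.
# Non-text parts (images, audio) pass through unscanned.
def scannable_text(messages):
    texts = []
    for msg in messages:
        content = msg.get("content")
        if isinstance(content, str):
            texts.append(content)           # plain user/system text
        elif isinstance(content, list):     # multi-modal content parts
            for part in content:
                if part.get("type") == "text":
                    texts.append(part["text"])
    return texts
```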
Shrike Guard focuses on pre-flight protection — blocking malicious prompts BEFORE they reach the LLM. This:
- Prevents prompt injection attacks at the source
- Has zero latency impact on LLM responses
- Catches the vast majority of threats at the input layer
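Conceptually, the pre-flight flow, including the fail-open/fail-closed choice, looks like the following plain-Python sketch. This is a hypothetical illustration of the semantics, not the SDK's internals:

```python
def guarded_call(scan, llm_call, fail_mode="open"):
    """Run a pre-flight scan, then call the LLM only if the prompt is allowed.

    Hypothetical sketch of fail-open vs. fail-closed semantics -- not the
    SDK's actual implementation.
    """
    try:
        verdict = scan()  # e.g. a call to the Shrike scan backend
    except Exception:
        if fail_mode == "closed":
            raise RuntimeError("scan unavailable; blocking (fail-closed)")
        return llm_call()  # fail-open: availability over strictness
    if not verdict["safe"]:
        raise PermissionError(f"blocked: {verdict['reason']}")
    return llm_call()      # scan passed; prompt reaches the LLM
```

Because the scan happens before the request is sent, a blocked prompt never consumes LLM tokens, and a passing scan adds no latency to the model's own response stream.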
Shrike Guard is one of several ways to integrate with the Shrike Security platform:
- MCP Server — `npx shrike-mcp` (GitHub)
- TypeScript SDK — `npm install shrike-guard` (GitHub)
- REST API — `POST https://api.shrikesecurity.com/agent/scan`
- LLM Proxy Gateway — Change one URL, scan everything
- Browser Extension — Chrome/Edge for ChatGPT, Claude, Gemini
- Dashboard — shrikesecurity.com
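The REST API listed above can be reached without any SDK. Below is a request-building sketch using only the standard library; the endpoint URL comes from this document, but the JSON body shape and Bearer-auth header are assumptions for illustration, so check the REST API docs for the real contract:

```python
import json
import urllib.request

def build_scan_request(prompt, api_key,
                       endpoint="https://api.shrikesecurity.com/agent/scan"):
    """Build (but do not send) a scan request for the REST API.

    The {"prompt": ...} body and Bearer-auth header are assumptions made
    for illustration -- consult the REST API documentation before use.
    """
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Pass the returned object to `urllib.request.urlopen` (or translate it to your HTTP client of choice) to perform the scan.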
Apache 2.0
- Shrike Security — Sign up, dashboard, docs
- GitHub Issues — Bug reports
- MCP Server — For MCP/agent integration
- TypeScript SDK — TypeScript equivalent