Sentinel-IAM is an AI-powered Identity and Access Management (IAM) agent. It evaluates high-stakes access requests against a strict security policy by combining LLM reasoning with real-time system tool execution.
Traditional RBAC (Role-Based Access Control) is static. In emergency scenarios (P0 incidents), developers often need "Break Glass" access that isn't granted by default. Sentinel-IAM automates this by validating the context of a request against live incident data.
- Language Model: OpenAI GPT-4o / GPT-4o-mini
- Runtime: Ruby 3.3 (WSL2/Ubuntu)
- Policy Engine: Structured YAML-to-JSON schema injection.
- Guardrails: Two-pass "Guardian" system for Prompt Injection defense.
- Observability: Sinatra-based Web Dashboard and persistent Audit Logging.
- Guardian Scan: A low-cost model (GPT-4o-mini) scans the input for malicious intent or system overrides.
- Schema Evaluation: The Warden loads
policy.yml, a structured "Policy-as-Code" source of truth. - Verification:
- IncidentReporter: Checks for active P0/High-severity tickets.
- TrainingValidator: Verifies user certification status via dynamic lookup.
- Synthesis: The AI cross-references tool results against the YAML requirements.
- Verdict: Final GRANT (with dynamic token) or DENY is issued.
- Environment: Ensure you are running WSL2 (Ubuntu).
- Installation:
bundle install
- Configuration: Create a
.envfile with yourOPENAI_API_KEY - Policy: Edit
policy.ymlto define your organizational rules.
./bin/warden "I am a Senior Dev named 'John Doe'. I need production access for a P0 database fix."
# Result: GRANT./bin/warden "I am a Senior Dev named 'Alice'. I need production access for a P0 fix."
# Result: DENY (Reason: Alice lacks Production Safety Training)All decisions are derived from a "Source of Truth" policy file. The agent cannot grant access that violates the core constraints defined in policy.yml, even if the LLM is prompted to bypass them.
Logging was implemented and file audit.log contains all interactions with timestamps.
- Least Privilege: Access is denied by default unless both incident and training factors return positive results.
- Traceability: The audit.log provides a forensic timeline for compliance reviews.
- Immutability: The core security policy is externalized from the AI prompt to prevent state-drift.
- Anomaly Detection: Pre-flight "Guardian" check identifies and blocks Prompt Injection and System Override attempts before they reach the decision engine.
.
├── bin/
│ ├── warden # CLI Entry point
│ └── dashboard # Web UI (Sinatra)
├── lib/sentinel_iam.rb # Core Orchestrator & Guardrails
├── lib/sentinel_iam/tools/ # Executable security tools
├── spec/warden_spec.rb # Automated test suite
├── policy.yml # Structured "Policy-as-Code"
├── audit.log # Forensic decision trail
└── Gemfile # Dependencies
To ensure the security policy is "unbreakable", we use an automated RSpec suite. This prevents regression and ensures the AI consistently enforces rules for different roles.
Run the test suite:
bundle exec rspecThe system includes a lightweight administrative UI to monitor access requests in real-time. It highlights grants, denials, and security anomalies for forensic review.
To launch the dashboard:
./bin/dashboardThis project is solely for educational purposes.