👤 Authored by: Alejandro Gabriel Guerrero | gbrayhan@gmail.com
This manifesto outlines the core beliefs and guiding principles for designing, implementing, and maintaining AI-driven systems with integrity, security, and human oversight at their foundation.
Designing, maintaining, and monitoring moderately complex systems still demands deep expertise in software principles, architecture, and system design. As AI reshapes how we build software, many long‑standing patterns will need to evolve—and understanding which ones and why takes advanced analysis and experience.
AI is, at its core, a pattern-replicating engine: no will, no initiative, no consciousness. Any “drive” it seems to show comes from us, and big vendors’ sensational claims are often marketing, not magic. Remember Gödel’s first incompleteness theorem: any consistent formal system rich enough to express arithmetic contains true statements it cannot prove, so algorithmic systems simply cannot cover everything. Consciousness remains beyond computation.
That said, AI can supercharge truly deterministic workflows. With vigilant engineers securing and supervising every change, we can cut development times and deliver safer, more stable, and more efficient products.
Because AI merely replicates patterns, it responds only to the data and instructions we provide. Any appearance of intelligence or intent is the result of careful human design and supervision, not of inherent AI capability. As we integrate AI into critical workflows, we must balance its power with rigorous engineering discipline.
- Human Agency: AI has no intrinsic drive or goals. All initiative comes from human intent. 🧑‍💻
- Security First: AI systems are high-value targets; security must be baked in at every stage. 🔒
- Transparency: Every AI action—input, output, and decision—should be logged for auditability. 📋
- Resilience Through Redundancy: AI can fail without warning. Plan backups and fallbacks. 🔄
- Simplicity is Strength: Complex pipelines increase risk; simplicity reduces it. ✨
- Gödel’s Reminder: Some truths are unprovable—and thus beyond computation. Recognize the limits of algorithms. 📐
The following guiding principles put these beliefs into practice:
- Intercept & Emulate 🛠️: Capture all external outputs to create faithful emulations for stress-testing (sketched below).
- Single File Responsibility 📁: Consolidate each AI driver into one file to simplify review and control.
- Isolate Sensitive Layers 🛡️: Separate critical data and processes into dedicated modules or projects.
- KISS (Keep It Simple, Stupid) 🤓: Embrace simplicity to minimize risk and maximize maintainability.
- Embrace Repetition 🔁: Duplication is acceptable; ensure every scenario is covered by tests.
- Revised Test Pyramid 🔼: 1) user acceptance tests, 2) integration tests, 3) unit tests.
- Stable Structure 🏗️: Lock down a minimal, stable project layout; avoid constant reorganizations.
- Restrict AI Edits 🚫: Explicitly block AI from modifying high-risk files or performing dangerous operations (sketched below).
- Backup First 💾: Always back up code and data before executing AI-driven workflows (sketched below).
- Continuous Monitoring 📈: Implement real-time alerts and dashboards to catch silent failures.
- Security by Design 🛡️: Treat AI components as high-risk; enforce rigorous access control and vulnerability scanning.
- Human Oversight 👀: Every AI-driven change must be reviewed and approved by a qualified engineer.
- Transparent Pipelines 🔍: Log each input, output, and derived action for full traceability (sketched below).
- Audit Trails 🕵️‍♂️: Regularly review AI logs to detect subtle issues before they escalate.
- Secondary AI Supervision 🤖🔎: Consider a “watchdog” AI to validate decisions and catch anomalies (sketched below).
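As a concrete illustration of Intercept & Emulate, here is a minimal Python sketch, assuming a JSON file is an acceptable capture store. `RecordingProxy` and `fetch_exchange_rate` are hypothetical names, not part of any existing tool: the proxy records each external output and can later replay it as a deterministic emulation for stress-testing.

```python
import json
from pathlib import Path

class RecordingProxy:
    """Wraps an external call: records outputs in 'record' mode,
    replays them deterministically in 'replay' mode."""

    def __init__(self, fn, log_path="captured_outputs.json", mode="record"):
        self.fn = fn                     # the real external call
        self.log_path = Path(log_path)   # where captured outputs live
        self.mode = mode
        self.cache = (json.loads(self.log_path.read_text())
                      if self.log_path.exists() else {})

    def __call__(self, *args):
        key = json.dumps(args)           # the inputs serve as the lookup key
        if self.mode == "replay":
            return self.cache[key]       # faithful emulation: no network
        result = self.fn(*args)          # real call, then capture the output
        self.cache[key] = result
        self.log_path.write_text(json.dumps(self.cache, indent=2))
        return result

# Hypothetical external call standing in for a real API client.
def fetch_exchange_rate(currency):
    return {"currency": currency, "rate": 17.2}   # pretend network result

proxy = RecordingProxy(fetch_exchange_rate, mode="record")
proxy("MXN")                                       # captured to disk
replay = RecordingProxy(fetch_exchange_rate, mode="replay")
print(replay("MXN"))                               # served from the recording
```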
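Restrict AI Edits can be enforced mechanically before any AI-proposed change lands. The sketch below shows one possible guard; the `PROTECTED_PATTERNS` deny-list and `validate_ai_changeset` are illustrative assumptions to adapt to your own high-risk files.

```python
from fnmatch import fnmatch

# Illustrative deny-list; replace with your project's real high-risk paths.
PROTECTED_PATTERNS = [
    ".github/workflows/*",   # CI/CD definitions
    "secrets/*",             # credentials and keys
    "migrations/*",          # irreversible schema changes
    "*.pem",                 # private key material
]

def validate_ai_changeset(changed_paths):
    """Raise if an AI-driven edit touches any protected path."""
    blocked = [p for p in changed_paths
               if any(fnmatch(p, pat) for pat in PROTECTED_PATTERNS)]
    if blocked:
        raise PermissionError(f"AI edits blocked for: {blocked}")
    return changed_paths  # safe to hand off for human review

validate_ai_changeset(["src/app.py"])          # passes
# validate_ai_changeset(["secrets/api.key"])   # raises PermissionError
```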
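Backup First is cheap to automate. A minimal sketch, assuming a local directory tree; `backup_before_ai_run` is a hypothetical helper, not a standard API.

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup_before_ai_run(source="./project", backup_root="./backups"):
    """Copy code and data to a timestamped folder before any AI workflow."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = Path(backup_root) / f"backup-{stamp}"
    shutil.copytree(source, target)   # fails loudly if the source is missing
    return target                     # keep the path handy for rollback

# Usage: snapshot first, then let the AI-driven workflow touch the tree.
# Roll back with: shutil.copytree(target, "./project", dirs_exist_ok=True)
```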
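Transparent Pipelines and Audit Trails rest on the same mechanism: every input, output, and derived action gets an append-only log entry. A minimal sketch, assuming newline-delimited JSON is an acceptable log format; `audited` and `summarize` are hypothetical names.

```python
import functools
import json
import time

def audited(logfile="ai_audit.log"):
    """Decorator: append every input, output, and timestamp to an audit log."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            entry = {"ts": time.time(), "action": fn.__name__,
                     "inputs": {"args": args, "kwargs": kwargs},
                     "output": result}
            with open(logfile, "a") as f:         # append-only trail
                f.write(json.dumps(entry, default=str) + "\n")
            return result
        return inner
    return wrap

@audited()
def summarize(text):              # hypothetical AI-driven action
    return text[:20] + "..."

summarize("Every AI action should leave a trace.")
```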
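Secondary AI Supervision can start as simply as a second, independent check that must approve the primary output before it is acted on. In this sketch the watchdog is a rule-based stand-in; in practice it could be a separate model. All function names are hypothetical.

```python
def primary_model(prompt):
    """Stand-in for the primary AI; returns a deliberately risky output."""
    return "DROP TABLE users;"

def watchdog_model(prompt, answer):
    """Independent second check; a rule-based stand-in for a watchdog AI."""
    suspicious = ["DROP TABLE", "DELETE FROM", "rm -rf"]
    return not any(s in answer for s in suspicious)

def supervised_generate(prompt):
    answer = primary_model(prompt)
    if not watchdog_model(prompt, answer):
        raise RuntimeError("Watchdog rejected output; escalate to a human.")
    return answer

try:
    supervised_generate("clean up old rows")
except RuntimeError as e:
    print(e)        # anomaly caught; a qualified engineer takes over
```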
💬 We commit to responsible AI: leveraging its capabilities while safeguarding security, transparency, and human values.
© 2025 Alejandro Gabriel Guerrero