┌─────────────────────────────────────────────────────────────────────────┐
│ > Tech enthusiast building things that scale                            │
│ > 12 years across Amazon · Microsoft · McAfee                           │
│ > From XDR threat correlation to LLM inference pipelines                │
│ > Currently hacking away in Berlin 🏗️                                   │
└─────────────────────────────────────────────────────────────────────────┘
fn me() -> Enthusiast {
    Enthusiast {
        focus: vec!["LLM Tooling", "Distributed Systems", "Creative Coding"],
        security: vec!["XDR", "Threat Correlation", "Malware Analysis"],
        building: vec!["AI-native products", "Agent architectures", "Local LLM tooling"],
        location: "Berlin, DE 🇩🇪",
    }
}

A swarm of specialised AI agents that autonomously investigate security incidents — from raw threat signal to incident report — without human intervention.
Multi-agent system where each agent has a distinct role, memory, toolset, and LLM — coordinated via an event-sourced Akka Typed actor hierarchy. Built on top of Anthropic and OpenAI APIs with a model-agnostic provider abstraction.
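A model-agnostic provider abstraction usually boils down to one trait that every backend implements, so agent code never names a concrete vendor. A minimal sketch, with hypothetical names (the project's actual types and wire calls are not shown here):

```rust
// Hypothetical provider abstraction: each LLM backend implements the same
// trait, so an agent holding `Box<dyn Provider>` can swap Anthropic, OpenAI,
// or a local model without changing its call sites.
trait Provider {
    fn name(&self) -> &str;
    fn complete(&self, prompt: &str) -> String;
}

struct Anthropic;
struct OpenAi;

impl Provider for Anthropic {
    fn name(&self) -> &str { "anthropic" }
    fn complete(&self, prompt: &str) -> String {
        // a real implementation would call the vendor API here
        format!("[anthropic] {prompt}")
    }
}

impl Provider for OpenAi {
    fn name(&self) -> &str { "openai" }
    fn complete(&self, prompt: &str) -> String {
        format!("[openai] {prompt}")
    }
}

fn main() {
    // agents only ever see the trait object, never the concrete backend
    let providers: Vec<Box<dyn Provider>> =
        vec![Box::new(Anthropic), Box::new(OpenAi)];
    for p in &providers {
        println!("{}: {}", p.name(), p.complete("triage this alert"));
    }
}
```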
Visual workflow builder for designing, testing, and running multi-agent pipelines against live LLM providers — entirely on your machine.
React studio + FastAPI runtime for orchestrating multi-agent workflows visually. Supports 14+ providers (Anthropic, OpenAI, Bedrock, Gemini, Mistral, Ollama and more), MCP server integration, versioned workflow assets, and live run inspection — all bound to localhost by default with encrypted credential storage.
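The core idea behind a versioned workflow asset can be sketched as a named pipeline of agent steps executed in order, each step bound to a provider and a prompt, with the previous output piped into the next. All names below are illustrative, not the project's actual schema:

```rust
// Hypothetical workflow asset: a named, versioned sequence of agent steps.
struct Step {
    agent: &'static str,
    provider: &'static str,
    prompt: &'static str,
}

struct Workflow {
    name: &'static str,
    version: u32,
    steps: Vec<Step>,
}

// Run the pipeline by threading each step's output into the next one.
// A real runtime would dispatch `prompt` to the configured provider here.
fn run(wf: &Workflow, input: &str) -> String {
    let mut acc = input.to_string();
    for step in &wf.steps {
        acc = format!("{} -> {}[{}]({})", acc, step.agent, step.provider, step.prompt);
    }
    acc
}

fn main() {
    let wf = Workflow {
        name: "triage",
        version: 1,
        steps: vec![
            Step { agent: "enrich", provider: "ollama", prompt: "enrich the alert" },
            Step { agent: "summarise", provider: "anthropic", prompt: "write the report" },
        ],
    };
    println!("{} v{}: {}", wf.name, wf.version, run(&wf, "raw-alert"));
}
```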
Rust chat server with function calling against a local LLM — /chat and /search endpoints, zero cloud dependency.
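The function-calling loop in such a server follows one pattern: the model replies either with plain text or with a tool invocation, which the server executes before answering. A hedged sketch — the `name:argument` wire format and the tool names are illustrative, not the project's actual protocol:

```rust
// Dispatch a tool invocation to a local function.
fn run_tool(name: &str, arg: &str) -> String {
    match name {
        // a real server would query a search backend here
        "search" => format!("top results for '{arg}'"),
        _ => format!("unknown tool: {name}"),
    }
}

// The model's reply is either `name:argument` (a tool call) or plain text.
fn handle_model_reply(reply: &str) -> String {
    match reply.split_once(':') {
        Some((name, arg)) => run_tool(name, arg),
        None => reply.to_string(),
    }
}

fn main() {
    println!("{}", handle_model_reply("search:rust actors"));
    println!("{}", handle_model_reply("hello"));
}
```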
Scala/ZIO client for Ollama with tool calling — web search, webpage extraction, and Python code execution as LLM tools. Streaming supported.
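The streaming side of a client like this consumes tokens incrementally and folds them into the final answer as they arrive, rather than waiting for the full response. A minimal sketch in Rust (the chunk source below is a stand-in for the actual HTTP stream):

```rust
// Stand-in for a streamed LLM response: chunks arrive one at a time.
fn stream_chunks() -> impl Iterator<Item = &'static str> {
    ["The ", "answer ", "is ", "42."].into_iter()
}

fn main() {
    let mut answer = String::new();
    for chunk in stream_chunks() {
        print!("{chunk}"); // render each token as it arrives
        answer.push_str(chunk);
    }
    println!();
    assert_eq!(answer, "The answer is 42.");
}
```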