🚧 Under Active Development: A professional orchestration and AI-driven observability platform. This project is still in active development and is not ready for production use.
DynObs modernizes container orchestration by bridging the gap between raw container management and intelligent observability. Unlike standard dashboards that simply display logs, DynObs attempts to understand them using local Large Language Models (LLMs).
It offers a "Single Pane of Glass" to deploy complex stacks (Orchestration) and monitor their health through AI-powered log analysis (Observability).
The original prototype utilized a split architecture (Python for Podman/AI, Java for API). We migrated to a Unified Java Backend for the following reasons:
- Architecture Simplicity: Consolidating to a single Spring Boot application reduces operational overhead and deployment complexity.
- Strong Typing: Leveraging Java's type safety prevents a class of runtime errors present in the dynamic Python scripts.
- Concurrency: Spring Boot's robust thread management is better suited for handling simultaneous log streams and HTTP requests than the previous synchronous Python implementation.
- Ecosystem: The `docker-java` library provides a more mature and stable interface for Podman socket interaction on Windows compared to the Python alternatives tested.
- React + Vite: Chosen for near-instant hot-reloading and component reusability.
- Dark Mode & Glassmorphism: Implemented to provide a premium, "Day-2 Operations" feel suitable for engineering tools.
- 1-Click Orchestration: Deploy multi-container setups (e.g., Nginx + Redis) instantly.
- Live Monitoring: Real-time tracking of active containers via Podman socket integration.
- 🚧 AI Log Analysis: On-demand inspection of container logs using Ollama (Llama 3.2) to detect anomalies and explain errors in plain English.
- Responsive Dashboard: A unified view for command, control, and insights.
Follow these steps to spin up the entire platform locally.
- Java JDK 17+
- Node.js 18+
- Podman Desktop (or the Podman CLI) installed, with the Podman machine running.
- Ollama: Installed and running.
- Pull the model:
ollama pull llama3.2:3b
The backend acts as the orchestrator. It connects to both Podman and Ollama.
# 1. Navigate to the app directory
cd app
# 2. Configure Podman Socket (Windows)
# This tells the Java app where to find the Podman machine
$env:DOCKER_HOST="npipe:////./pipe/podman-machine-default"
# 3. Build and Run
mvn clean install -DskipTests
mvn spring-boot:run

Server will start on http://localhost:8080
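For reference, here is a minimal sketch of how a Spring Boot backend can pick up that `DOCKER_HOST` value through `docker-java`'s default configuration. The class and bean names are illustrative, not necessarily what the repo uses:

```java
import com.github.dockerjava.api.DockerClient;
import com.github.dockerjava.core.DefaultDockerClientConfig;
import com.github.dockerjava.core.DockerClientBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical wiring (names are illustrative, not from the repo).
@Configuration
public class PodmanConfig {

    @Bean
    public DockerClient dockerClient() {
        // createDefaultConfigBuilder() reads DOCKER_HOST from the environment,
        // so the npipe path exported above is picked up automatically.
        DefaultDockerClientConfig config =
                DefaultDockerClientConfig.createDefaultConfigBuilder().build();
        return DockerClientBuilder.getInstance(config).build();
    }
}
```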
To verify observability, spin up some "noise" containers using the provided helper script.
# Open a new terminal in the project root
.\start_demo_containers.ps1

This starts Nginx, Redis, and a "Loop" container that generates logs.
The dashboard UI.
# 1. Navigate to frontend
cd frontend
# 2. Install Dependencies
npm install
# 3. Start Dev Server
npm run dev

Access the dashboard at http://localhost:5173
When you click "Deploy" in the dashboard:
- The frontend sends a request to the Java backend (`POST /api/orchestrate/web-stack`).
- The backend (`OrchestrationService`) receives the command.
- The backend acts as a controller, using the `docker-java` library to talk directly to your local Podman socket.
- It instructs Podman to pull images (`nginx`, `redis`) and start containers from scratch (see the sketch after this list).
- Note: This mimics a real-world orchestrator like Kubernetes, but runs locally on your machine.
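A minimal sketch of that deploy flow with `docker-java` is below; the repo's actual `OrchestrationService` may be structured differently, and the image tags and method names here are assumptions:

```java
import com.github.dockerjava.api.DockerClient;
import com.github.dockerjava.api.async.ResultCallback;
import com.github.dockerjava.api.command.CreateContainerResponse;
import com.github.dockerjava.api.model.PullResponseItem;
import org.springframework.stereotype.Service;

// Illustrative sketch; not the repo's actual OrchestrationService.
@Service
public class OrchestrationServiceSketch {

    private final DockerClient client;

    public OrchestrationServiceSketch(DockerClient client) {
        this.client = client;
    }

    public void deployWebStack() throws InterruptedException {
        for (String image : new String[] {"nginx:latest", "redis:latest"}) {
            // Pull the image; awaitCompletion() blocks until the pull finishes.
            client.pullImageCmd(image)
                  .exec(new ResultCallback.Adapter<PullResponseItem>())
                  .awaitCompletion();

            // Create a container from the image, then start it.
            CreateContainerResponse created = client.createContainerCmd(image).exec();
            client.startContainerCmd(created.getId()).exec();
        }
    }
}
```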
To fully test the AI Analysis features, you need a container that generates logs. We provide a script for this:
Option A: The Full Demo Script (Recommended)
This spins up `nginx`, `redis`, and a special `dynobs-alpine` container that prints logs for the AI to analyze.
# Run from project root
.\start_demo_containers.ps1

Option B: Dashboard Orchestration

You can also start containers directly from the UI:
- Go to the Dashboard.
- Click "Deploy" on the "Web Stack" card.
- Watch the "System Logs" panel as the Backend instructs Podman to start Nginx and Redis.
The "Active Containers" panel polls the backend every 5 seconds to show whatever is currently running on your Podman machine.
- Ensure `dynobs-alpine` is running (use Option A above).
- Find it in the "Active Containers" list.
- Click the "Analyze AI" button.
- The backend fetches the last 5 lines of logs, sends them to Ollama, and streams the explanation to the "System Logs" panel (a sketch of this flow follows).
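Here is a sketch of that analysis flow, assuming a non-streaming Ollama call for brevity (the real backend streams the response to the UI); the prompt wording and class names are illustrative:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import com.github.dockerjava.api.DockerClient;
import com.github.dockerjava.api.async.ResultCallback;
import com.github.dockerjava.api.model.Frame;
import org.springframework.web.client.RestTemplate;

// Illustrative sketch: tail 5 log lines, ask Ollama to explain them.
public class LogAnalysisSketch {

    private final DockerClient docker;
    private final RestTemplate rest = new RestTemplate();

    public LogAnalysisSketch(DockerClient docker) {
        this.docker = docker;
    }

    public String analyze(String containerId) throws InterruptedException {
        ByteArrayOutputStream logs = new ByteArrayOutputStream();

        // Tail the last 5 lines of stdout/stderr from the container.
        docker.logContainerCmd(containerId)
              .withStdOut(true)
              .withStdErr(true)
              .withTail(5)
              .exec(new ResultCallback.Adapter<Frame>() {
                  @Override
                  public void onNext(Frame frame) {
                      logs.writeBytes(frame.getPayload());
                  }
              })
              .awaitCompletion();

        // Ollama's /api/generate endpoint; stream=false returns one JSON object.
        Map<String, Object> request = Map.of(
                "model", "llama3.2:3b",
                "prompt", "Explain any errors in these container logs:\n"
                        + logs.toString(StandardCharsets.UTF_8),
                "stream", false);

        Map<?, ?> response = rest.postForObject(
                "http://localhost:11434/api/generate", request, Map.class);
        return response == null ? "" : (String) response.get("response");
    }
}
```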
- Framework: Spring Boot 3
- Container API: `docker-java` (interacting with Podman)
- AI Integration: `RestTemplate` connecting to the local Ollama API
- Core: React 18, Vite
- Styling: Tailwind CSS
- Animation: Framer Motion
- Icons: Lucide React
We plan to introduce a Drag-and-Drop interface for PDF ingestion.
- Goal: Allow engineers to upload architecture diagrams or runbooks (PDFs).
- Mechanism: The backend will vectorize these documents to provide "Context Engineering" for the LLM.
- Result: The AI won't just analyze generic logs; it will cross-reference errors against your specific service documentation to suggest highly relevant fixes. (A speculative sketch follows this list.)
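A speculative sketch of that lookup, assuming Ollama's `/api/embeddings` endpoint with an embedding model such as `nomic-embed-text` (which would need to be pulled separately) and brute-force cosine similarity; none of this exists in the repo yet:

```java
import java.util.List;
import java.util.Map;
import org.springframework.web.client.RestTemplate;

// Purely illustrative: sketches how a runbook chunk could be matched
// to a log line via embeddings. The model name is an assumption.
public class ContextLookupSketch {

    private final RestTemplate rest = new RestTemplate();

    // Fetch an embedding vector for a piece of text from Ollama.
    @SuppressWarnings("unchecked")
    double[] embed(String text) {
        Map<?, ?> response = rest.postForObject(
                "http://localhost:11434/api/embeddings",
                Map.of("model", "nomic-embed-text", "prompt", text),
                Map.class);
        List<Double> vector = (List<Double>) response.get("embedding");
        return vector.stream().mapToDouble(Double::doubleValue).toArray();
    }

    // Cosine similarity: higher means the chunk is more relevant to the log line.
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }
}
```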
While the current version uses local Podman for simplicity, the long-term vision is to support Kubernetes.
- This will allow DynObs to orchestrate clusters across multiple nodes, moving from a "Local Dev Tool" to a "Production Ops Platform".
Created by Mahed Javed - 2026 - For any queries and feedback, please get in touch: mahed95@gmail.com