A lightweight visual workflow builder for LLM pipelines. Compose workflows from nodes (Text Input → LLM → Output) and run them locally. The frontend is React + React Flow; the backend is Flask with an Ollama proxy endpoint.
- Highlights
- Quick Start
- Frontend Overview
- Backend Overview
- Project Structure
- Development Notes
- Roadmap (short)

Highlights
- Node-based canvas with connectors and labels
- Consistent node shell (NodeShell) with header, connectors and controls
- Implemented nodes:
  - TextInputNode: emits user text
  - SettingsNode: provides config (url, model) to other nodes
  - OllamaNode (mock): displays config/prompt info (no backend call yet)
  - OutputNode: shows rendered text/markdown with autosize/expand
 
- Backend Flask API with /api/ollama/chat proxy and health endpoints
- Docker compose for local full-stack
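
For orientation, the Text Input → LLM → Output pipeline can be pictured as plain React Flow data. This is a minimal sketch only: the node `type` names, `data` fields, and the `reactflow` package import are assumptions, not the exact identifiers registered by the canvas feature.

```tsx
import type { Node, Edge } from "reactflow";

// Three nodes wired left to right: text input → LLM → output (illustrative types).
export const nodes: Node[] = [
  { id: "text-1", type: "textInput", position: { x: 0, y: 0 }, data: { text: "Hello, world" } },
  { id: "llm-1", type: "ollama", position: { x: 260, y: 0 }, data: {} },
  { id: "out-1", type: "output", position: { x: 520, y: 0 }, data: {} },
];

// Edges carry the text downstream from one node to the next.
export const edges: Edge[] = [
  { id: "e-text-llm", source: "text-1", target: "llm-1" },
  { id: "e-llm-out", source: "llm-1", target: "out-1" },
];
```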

Quick Start
Prereqs: Node.js 18+, Python 3.9+, Docker (optional)
Frontend
git clone https://github.com/davy1ex/pipelineLLM
cd pipelineLLM/frontend
npm install
npm run dev
# http://localhost:5173

Backend
cd pipelineLLM/backend
pip install -r requirements.txt
python server.py
# http://localhost:5000

Docker (frontend + backend + nginx)
docker compose -f docker/docker-compose.yml up --build

Frontend Overview
- Stack: React 18/19, TypeScript, Vite, React Flow
- Entry: frontend/src/app/main.tsx, page: frontend/src/pages/workflow/WorkFlowPage.tsx
- Canvas feature: frontend/src/features/canvas
- Execution (WIP): frontend/src/features/workflow-execution
- Nodes: frontend/src/entities/nodes/*
  - Text Input: text-input/TextInputNode.tsx
  - Settings: settings/SettingsNode.tsx
  - Ollama (mock): ollama/OllamaNode.tsx
  - Output: output/OutputNode.tsx
 
NodeShell: frontend/src/shared/ui/NodeShell.tsx
- Props: title, headerActions?, width?, controls?, connectors?
- connectors: array of { id?, type: 'source' | 'target', position, label, dataType? }
- Renders connectors as rows: a handle (colored by data type) plus a text label
- Controls: compact rows with optional editing and value view
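
A hedged usage sketch based on the prop shapes listed above; the import path, the NodeShell export name, the position strings, and the "text" data type are assumptions and may differ from the real component.

```tsx
// Illustrative node using NodeShell: one incoming and one outgoing text connector.
import { NodeShell } from "../../shared/ui/NodeShell"; // path/export name assumed

export const GreetingNode = () => (
  <NodeShell
    title="Greeting"
    width={220}
    connectors={[
      // target = incoming edge, source = outgoing edge (React Flow semantics)
      { id: "in", type: "target", position: "left", label: "Name", dataType: "text" },
      { id: "out", type: "source", position: "right", label: "Greeting", dataType: "text" },
    ]}
  />
);
```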

Output rendering (OutputNode)
- Renders plain text or markdown (markdown-it)
- Expand button toggles the height clamp; autosize computes width/height to fit the content
- Long content wraps at ~1200px; horizontal overflow is avoided
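
A rough sketch of that render/expand behavior, assuming markdown-it. The clamp height, wrap width, component name, and inline styling below are illustrative, not the actual OutputNode code.

```tsx
import { useState } from "react";
import MarkdownIt from "markdown-it";

const md = new MarkdownIt();

export const OutputBody = ({ text }: { text: string }) => {
  const [expanded, setExpanded] = useState(false);
  return (
    <div style={{ maxWidth: 1200 }}> {/* long content wraps at ~1200px */}
      {/* height clamp is applied unless expanded */}
      <div
        style={{ maxHeight: expanded ? "none" : 240, overflow: "hidden" }}
        dangerouslySetInnerHTML={{ __html: md.render(text) }}
      />
      <button onClick={() => setExpanded((v) => !v)}>
        {expanded ? "Collapse" : "Expand"}
      </button>
    </div>
  );
};
```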
Docs (frontend)
- frontend/docs/ARCHITECTURE.md
- frontend/docs/NODES_AND_EDGES.md
- frontend/docs/WORKFLOW_STORE.md

Backend Overview
backend/server.py (Flask + CORS)
Endpoints
- GET /api/health → basic health
- GET /api/hello?name=... → sample
- POST /api/data → echo
- POST /api/ollama/chat → proxy to Ollama /api/generate
  - Body: { url?, model?, prompt, system?, temperature? }
  - Normalizes the Ollama URL; maps localhost to host.docker.internal for Docker
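
For reference, a minimal client-side call to the proxy using the Body shape above. It assumes the backend is reachable on port 5000 and falls back to its own defaults when `url` and `model` are omitted; the model name and temperature are example values.

```ts
async function askOllama(prompt: string) {
  const res = await fetch("http://localhost:5000/api/ollama/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",   // optional; example model name
      prompt,
      temperature: 0.7,  // optional
    }),
  });
  if (!res.ok) throw new Error(`Proxy error: ${res.status}`);
  return res.json();     // shape follows the Ollama /api/generate response
}
```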
Config
- Example env: backend/env.example
- Run: python server.py (port 5000)

Project Structure
pipelineLLM/
├── backend/
│   ├── server.py
│   ├── requirements.txt
│   └── Dockerfile
├── docker/
│   ├── docker-compose.yml
│   └── nginx/
├── frontend/
│   ├── src/
│   │   ├── app/
│   │   ├── entities/
│   │   │   └── nodes/
│   │   ├── features/
│   │   ├── pages/
│   │   └── shared/
│   ├── package.json
│   └── vite.config.ts
└── README.md

Development Notes
- Connectors are rendered inside NodeShell (not via absolute-positioned React Flow handles anymore). Pass connectors as an array on each node component (see OllamaNode.tsx, SettingsNode.tsx, TextInputNode.tsx, OutputNode.tsx).
- Handle color is derived from the data type via shared/lib/dataTypes.ts (see the sketch below).
- The canvas store and the execution store live under features/canvas and features/workflow-execution.
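
As an illustration only, such a data-type → color mapping could look like the following; the actual type names, colors, and exports in shared/lib/dataTypes.ts may differ.

```ts
export type DataType = "text" | "config";

// Handle colors keyed by data type; unknown types fall back to grey.
const dataTypeColors: Record<DataType, string> = {
  text: "#4f9cf9",
  config: "#f9a84f",
};

export const colorForDataType = (t?: DataType): string =>
  t ? dataTypeColors[t] : "#999999";
```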

Roadmap (short)
- Add a Python block
- Add a block for reading files
- Add a Zettelkasten note-taking block fed by the Ollama block
