# n8n-nodes-ollama-langfuse

This project adds Langfuse tracing to LLMs served by Ollama, packaged as a custom n8n node. Every prompt/response interaction is logged to Langfuse for observability and debugging.
## Prerequisites

- Node.js >= 18
- pnpm
- Docker + Docker Compose
- Git
## Project Structure

```
n8n-nodes-ollama-langfuse/
├── docker-compose.yml          # Main app (n8n + custom nodes)
├── Dockerfile                  # Optional build file for n8n
├── Langfuse/docker-compose.yml # Langfuse tracing backend
├── n8n-custom/custom/          # Custom node code
│   ├── nodes/                  # Custom node logic (Ollama/LmChatOllama, Ollama/handler)
│   ├── credentials/            # Custom credential definitions
│   ├── dist/                   # Compiled JS and flattened icons
│   ├── gulpfile.js             # Icon processor
│   └── package.json            # Node definitions
└── n8n-data/                   # n8n runtime data (volume)
```
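The wiring between the directories above and the n8n container can be sketched as a Compose file like the one below. This is an illustrative fragment, not the repo's actual `docker-compose.yml`: the image tag, port, and mount paths are assumptions, relying on n8n's default behavior of loading custom extensions from `~/.n8n/custom`.

```yaml
# Illustrative sketch only — the project's real docker-compose.yml may differ.
services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    volumes:
      # Persist n8n runtime data on the host
      - ./n8n-data:/home/node/.n8n
      # Mount the built custom nodes where n8n discovers extensions
      - ./n8n-custom/custom:/home/node/.n8n/custom
```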
## Install and Build the Custom Nodes

```bash
npm install -g pnpm
cd n8n-custom/custom
pnpm install
pnpm build
```

This compiles your code and copies the icons into `dist/`.
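For n8n to pick up the compiled output, the package's `package.json` declares the node and credential entry points under an `n8n` key. The fragment below is a hedged sketch: the `n8n` block format is n8n's standard convention for node packages, but the exact file names (e.g. `LangfuseApi.credentials.js`) are assumptions, not confirmed paths from this repo.

```json
{
  "name": "n8n-nodes-ollama-langfuse",
  "n8n": {
    "n8nNodesApiVersion": 1,
    "credentials": ["dist/credentials/LangfuseApi.credentials.js"],
    "nodes": ["dist/nodes/Ollama/LmChatOllama.node.js"]
  }
}
```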
## Start Langfuse

```bash
cd Langfuse
docker compose up -d
```

Access the Langfuse UI at http://localhost:3001.
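To connect n8n to this Langfuse instance you need an API key pair, created in the Langfuse UI under the project's settings. The values below are placeholders showing the expected shape (Langfuse public keys start with `pk-lf-`, secret keys with `sk-lf-`); the host must point at the UI/API port mapped above.

```
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_HOST=http://localhost:3001
```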
## Start n8n

```bash
docker compose up --build
```

If you update the node logic, re-run `pnpm build` and restart the Docker containers.
## Usage

- Add Langfuse and Ollama credentials in n8n
- Build a workflow: Prompt Template → LLM Chain → Ollama LLM
- Run the workflow
- View the traces in the Langfuse UI
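The record that shows up in the Langfuse UI for each run corresponds to one generation per prompt/response pair. The sketch below illustrates the kind of payload the handler could assemble before sending it to Langfuse; the field names mirror Langfuse's generation concept, but `buildGeneration` and `GenerationRecord` are hypothetical helpers for illustration, not this project's actual code.

```typescript
// Hypothetical shape of one traced prompt/response interaction.
interface GenerationRecord {
  name: string;      // label shown in the Langfuse trace tree
  model: string;     // Ollama model used, e.g. "llama3"
  input: string;     // prompt sent to the model
  output: string;    // model's response
  startTime: Date;
  endTime: Date;
  latencyMs: number; // derived: endTime - startTime
}

// Assembles a generation record from one LLM call (illustrative only).
function buildGeneration(
  model: string,
  prompt: string,
  response: string,
  startTime: Date,
  endTime: Date,
): GenerationRecord {
  return {
    name: "ollama-generation",
    model,
    input: prompt,
    output: response,
    startTime,
    endTime,
    latencyMs: endTime.getTime() - startTime.getTime(),
  };
}
```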
## Rebuilding After Changes

```bash
pnpm build
docker compose down
docker compose up --build
```
## License

MIT