# n8n Ollama Agents

A collection of the credentials and workflows I use with Ollama.
See INFO.md for upstream details.
The stack includes:

- n8n: Low-code automation platform
- Ollama: Cross-platform LLM runner
- Qdrant: Vector database for AI applications
- PostgreSQL: Relational database for data storage
- Redis: In-memory data structure store, used for caching and session management
- Supabase: Open-source alternative to Firebase, used for real-time data sync and storage
Typical usage:

- Setup: The Docker Compose file initializes all necessary services (see the quick verification sketch after this list).
- Data Ingestion: Use n8n workflows to load data into Qdrant or Supabase.
- AI Processing: Leverage Ollama for local LLM inference within n8n workflows.
- Workflow Creation: Build custom AI agents and RAG systems using n8n's visual editor.
- Integration: Connect your AI workflows with external services and APIs.
- Execution: Run your workflows on-demand or on a schedule within the self-hosted environment.
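As a quick sanity check once the stack is up, you can hit each service's HTTP endpoint from the host. This is a minimal sketch; the port mappings are assumptions based on each service's defaults (n8n on 5678, Qdrant on 6333, Ollama on 11434), so adjust them if your Compose file maps different ports.

```bash
# Minimal health-check sketch -- assumes default host port mappings.
curl -s http://localhost:5678/healthz        # n8n health endpoint
curl -s http://localhost:6333/collections    # Qdrant REST API: list collections
curl -s http://localhost:11434/api/tags      # Ollama API: list pulled models
```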
When integrating with other services running on your local machine (outside of the Docker network), use the special DNS name `host.docker.internal` instead of `localhost`. This allows containers to communicate with services on your host machine.
For example, if you have a service running on port 3000 on your local machine, you would access it from within a container using `http://host.docker.internal:3000`.
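As a concrete sketch, here is one way to test that reachability from inside the n8n container; the service name `n8n` and the use of busybox `wget` (in case `curl` is not installed in the image) are assumptions, and the URL is just a placeholder for your own service.

```bash
# Run from the host: execute a request inside the running n8n container.
# http://host.docker.internal:3000/ stands in for whatever service is
# listening on port 3000 on your host machine.
docker compose exec n8n wget -qO- http://host.docker.internal:3000/
```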
Included workflows:

- Local RAG AI Agent
- Supabase RAG AI Agent
- Base RAG AI Agent
- Demo Agent Workflow
- Qdrant Vector Store Loader
- Supabase Vector Store Loader
- Flux Image Generator
- Company Research Workflow
- Appointment Booking Agent
- LinkedIn Post Automation
- Reddit Trend Analysis
- Hacker News Insights
- News Aggregator
- Notion to LinkedIn Poster
- Siri Ollama Agent
Start all services in the background:

```bash
docker compose up -d
```
Workflow and credential backups are stored in `./backups` and can be restored using the `n8n-restore` container.
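The exact layout under `./backups` depends on how the backup container is configured, but you can inspect it directly from the host, for example:

```bash
# List everything that has been backed up so far.
ls -R ./backups
```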
To clone the upstream starter kit and start the stack with the AMD GPU profile:

```bash
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
docker compose --profile gpu-amd up
```
To create a backup of your workflows and credentials into `./backups`:

```bash
docker compose up n8n-backup
```
If you're running Ollama locally on your Mac (not in Docker), you need to modify the `OLLAMA_HOST` environment variable in the n8n service configuration. Update the `x-n8n` section in your Docker Compose file as follows:
```yaml
x-n8n: &service-n8n
  # ... other configurations ...
  environment:
    # ... other environment variables ...
    - OLLAMA_HOST=host.docker.internal:11434
```
Additionally, after you see "Editor is now accessible via: http://localhost:5678/":
- Head to http://localhost:5678/home/credentials
- Click on "Local Ollama service"
- Change the base URL to "http://host.docker.internal:11434/"
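To confirm that the containers can actually reach the Ollama instance on your Mac, a quick check along these lines can help; it assumes the n8n service is named `n8n` in the Compose file and that Ollama is listening on its default port 11434.

```bash
# From the host: confirm Ollama is running and list the pulled models.
curl -s http://localhost:11434/api/tags

# From inside the n8n container: confirm host.docker.internal resolves and
# the Ollama API is reachable (busybox wget is used in case curl is absent).
docker compose exec n8n wget -qO- http://host.docker.internal:11434/api/tags
```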
To restore workflows and credentials from `./backups`:

```bash
docker compose up n8n-restore
```
To clone the upstream starter kit and start the stack with the CPU profile:

```bash
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
docker compose --profile cpu up
```
## ⚡️ Quick start and usage
The core of the Self-hosted AI Starter Kit is a Docker Compose file, pre-configured with network and storage settings, minimizing the need for additional installations.
After completing the installation steps above, simply follow the steps below to get started.
1. Open <http://localhost:5678/> in your browser to set up n8n. You’ll only
have to do this once.
2. Open the included workflow:
<http://localhost:5678/workflow/srOnR8PAY3u4RSwb>
3. Click the **Chat** button at the bottom of the canvas, to start running the workflow.
4. If this is the first time you’re running the workflow, you may need to wait
   until Ollama finishes downloading Llama3.2. You can inspect the docker
   console logs to check on the progress (see the sketch after these steps).
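For example, assuming the Ollama service in your Compose file is simply named `ollama` (check `docker compose ps` for the actual name), you could follow the download and then confirm the model is available:

```bash
# Follow the Ollama container logs while the model downloads.
docker compose logs -f ollama

# Once the download finishes, confirm Llama3.2 appears in the model list.
docker compose exec ollama ollama list
```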
To open n8n at any time, visit <http://localhost:5678/> in your browser.
With your n8n instance, you’ll have access to over 400 integrations and a
suite of basic and advanced AI nodes such as
[AI Agent](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/),
[Text classifier](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.text-classifier/),
and [Information Extractor](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.information-extractor/)
nodes. To keep everything local, just remember to use the Ollama node for your
language model and Qdrant as your vector store.
> [!NOTE]
> This starter kit is designed to help you get started with self-hosted AI
> workflows. While it’s not fully optimized for production environments, it
> combines robust components that work well together for proof-of-concept
> projects. You can customize it to meet your specific needs.
## Upgrading
* ### For Nvidia GPU setups:

  ```bash
  docker compose --profile gpu-nvidia pull
  docker compose create && docker compose --profile gpu-nvidia up
  ```

* ### For Mac / Apple Silicon users:

  ```bash
  docker compose pull
  docker compose create && docker compose up
  ```

* ### For Non-GPU setups:

  ```bash
  docker compose --profile cpu pull
  docker compose create && docker compose --profile cpu up
  ```
n8n is full of useful content for getting started quickly with its AI concepts and nodes. If you run into an issue, head to the n8n community forum for support.
- AI agents for developers: from theory to practice with n8n
- Tutorial: Build an AI workflow in n8n
- Langchain Concepts in n8n
- Demonstration of key differences between agents and chains
- What are vector databases?
For more AI workflow ideas, visit the official n8n AI template gallery. From each workflow, select the Use workflow button to automatically import the workflow into your local n8n instance.
- AI Agent Chat
- AI chat with any data source (using the n8n workflow tool)
- Chat with OpenAI Assistant (by adding a memory)
- Use an open-source LLM (via Hugging Face)
- Chat with PDF docs using AI (quoting sources)
- AI agent that can scrape webpages
- Tax Code Assistant
- Breakdown Documents into Study Notes with MistralAI and Qdrant
- Financial Documents Assistant using Qdrant and Mistral.ai
- Recipe Recommendations with Qdrant and Mistral
The self-hosted AI starter kit will create a shared folder (by default, located in the same directory) which is mounted to the n8n container and allows n8n to access files on disk. Within the n8n container, this folder is located at `/data/shared`; this is the path you’ll need to use in nodes that interact with the local filesystem, such as Read/Write Files from Disk, Local File Trigger, and Execute Command.
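A quick way to verify the mount, assuming the default `./shared` folder next to the Compose file and an n8n service named `n8n`:

```bash
# Drop a test file into the shared folder on the host...
echo "hello from the host" > ./shared/test.txt

# ...and confirm the n8n container sees it at /data/shared.
docker compose exec n8n ls -l /data/shared
```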
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.