# n8n Ollama Agents

My custom n8n stack using various AI/ML technologies and third-party integrations for automating workflows.

n8n Ollama Agents is a collection of the credentials and workflows I use with Ollama.

See [INFO.md](INFO.md) for upstream details.

## 🧩 Components

- **n8n**: Low-code automation platform
- **Ollama**: Cross-platform LLM runner
- **Qdrant**: Vector database for AI applications
- **PostgreSQL**: Relational database for data storage
- **Redis**: In-memory data structure store, used for caching and session management
- **Supabase**: Open-source alternative to Firebase, used for real-time data sync and storage
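These services are wired together by the repo's Docker Compose file. As a rough orientation, a minimal sketch could look like the following; the service names, images, ports, and environment variables are illustrative assumptions based on each component's defaults, so refer to the actual `docker-compose.yml` for the real definitions:

```yaml
# Sketch only: names, images, and ports are assumptions, not this repo's
# actual compose file.
services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"               # n8n editor UI
    environment:
      - DB_TYPE=postgresdb        # store n8n data in Postgres instead of SQLite
      - DB_POSTGRESDB_HOST=postgres
    depends_on: [postgres, ollama, qdrant, redis]
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"             # Ollama HTTP API
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"               # Qdrant REST API
  postgres:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=change-me
  redis:
    image: redis:7
# Supabase is typically consumed as a hosted service and configured through
# n8n credentials rather than run as a local container.
```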

## 🛠 Project Workflow

  1. Setup: The Docker Compose file initializes all necessary services.
  2. Data Ingestion: Use n8n workflows to load data into Qdrant or Supabase.
  3. AI Processing: Leverage Ollama for local LLM inference within n8n workflows.
  4. Workflow Creation: Build custom AI agents and RAG systems using n8n's visual editor.
  5. Integration: Connect your AI workflows with external services and APIs.
  6. Execution: Run your workflows on-demand or on a schedule within the self-hosted environment.
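For step 3, n8n's Ollama nodes talk to Ollama's HTTP API, so you can exercise the same API by hand to confirm inference works before wiring it into a workflow. The model name below is an assumption; use whatever model you have pulled:

```bash
# Ask the local Ollama instance for a one-off completion.
# /api/generate is Ollama's standard text-generation endpoint.
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Summarize what a RAG system does.", "stream": false}'
```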

## 🚀 Connecting to localhost services

When integrating with other services running on your local machine (outside of the Docker network), use the special DNS name host.docker.internal instead of localhost. This allows containers to communicate with services on your host machine.

For example, if you have a service running on port 3000 on your local machine, you would access it from within a container using:

`http://host.docker.internal:3000`
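For instance, assuming the n8n container is named `n8n` (adjust to your stack) and something is listening on port 3000 of the host, you could verify connectivity like this:

```bash
# wget is typically available in the Alpine-based n8n image; curl may not be.
docker exec -it n8n wget -qO- http://host.docker.internal:3000
```

Note that on Linux, `host.docker.internal` is not defined by default; on Docker 20.10+ you can provide it by adding `host.docker.internal:host-gateway` under the service's `extra_hosts`.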

## Included Workflows

- Local RAG AI Agent
- Supabase RAG AI Agent
- Base RAG AI Agent
- Demo Agent Workflow
- Qdrant Vector Store Loader
- Supabase Vector Store Loader
- Flux Image Generator
- Company Research Workflow
- Appointment Booking Agent
- LinkedIn Post Automation
- Reddit Trend Analysis
- Hacker News Insights
- News Aggregator
- Notion to LinkedIn Poster
- Siri Ollama Agent

## Getting Started

```bash
docker compose up -d
```
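Once the stack is up, it is worth confirming that every container started cleanly. The service name passed to `logs` is an assumption based on the components above:

```bash
docker compose ps            # each service should show "running" (or "healthy")
docker compose logs -f n8n   # follow n8n's startup output until the editor URL appears
```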

## Backup and Restore

Workflow and credential backups are stored in `./backups` and can be restored using the `n8n-restore` container.

Backup:

```bash
docker compose up n8n-backup
```

Restore:

```bash
docker compose up n8n-restore
```

## Installing

### For AMD GPU users on Linux

```bash
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
docker compose --profile gpu-amd up
```

### For Mac / Apple Silicon users running OLLAMA locally

If you're running OLLAMA locally on your Mac (not in Docker), you need to modify the OLLAMA_HOST environment variable in the n8n service configuration. Update the x-n8n section in your Docker Compose file as follows:

```yaml
x-n8n: &service-n8n
  # ... other configurations ...
  environment:
    # ... other environment variables ...
    - OLLAMA_HOST=host.docker.internal:11434
```

Additionally, after you see "Editor is now accessible via: http://localhost:5678/":

1. Head to http://localhost:5678/home/credentials
2. Click on "Local Ollama service"
3. Change the base URL to "http://host.docker.internal:11434/"

### For everyone else

```bash
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
docker compose --profile cpu up
```
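Whichever route you take, a quick sanity check that Ollama is reachable, both from the host and from inside the n8n container, can save debugging time later. The container name here is an assumption:

```bash
# From the host: list the models Ollama is serving (default port 11434).
curl http://localhost:11434/api/tags

# From inside the n8n container, via the Docker host gateway.
docker exec -it n8n wget -qO- http://host.docker.internal:11434/api/tags
```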


## ⚡️ Quick start and usage

The core of the Self-hosted AI Starter Kit is a Docker Compose file, pre-configured with network and storage settings, minimizing the need for additional installations.
After completing the installation steps above, simply follow the steps below to get started.

1. Open <http://localhost:5678/> in your browser to set up n8n. You’ll only
   have to do this once.
2. Open the included workflow:
   <http://localhost:5678/workflow/srOnR8PAY3u4RSwb>
3. Click the **Chat** button at the bottom of the canvas to start running the workflow.
4. If this is the first time you’re running the workflow, you may need to wait
   until Ollama finishes downloading Llama3.2. You can inspect the docker
   console logs to check on the progress.
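To check on the download in step 4, tail the Ollama service's logs; the service name is an assumption, so match it to your Compose file:

```bash
docker compose logs -f ollama   # shows the Llama3.2 pull progress on first run
```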

To open n8n at any time, visit <http://localhost:5678/> in your browser.

With your n8n instance, you’ll have access to over 400 integrations and a
suite of basic and advanced AI nodes such as
[AI Agent](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/),
[Text classifier](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.text-classifier/),
and [Information Extractor](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.information-extractor/)
nodes. To keep everything local, just remember to use the Ollama node for your
language model and Qdrant as your vector store.
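Before pointing a workflow at Qdrant, you can confirm the local instance is up by listing its collections over its REST API (default port 6333):

```bash
curl http://localhost:6333/collections   # returns the collections Qdrant currently holds
```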

> [!NOTE]
> This starter kit is designed to help you get started with self-hosted AI
> workflows. While it’s not fully optimized for production environments, it
> combines robust components that work well together for proof-of-concept
> projects. You can customize it to meet your specific needs.

## Upgrading

### For Nvidia GPU setups

```bash
docker compose --profile gpu-nvidia pull
docker compose create && docker compose --profile gpu-nvidia up
```

### For Mac / Apple Silicon users

```bash
docker compose pull
docker compose create && docker compose up
```

### For Non-GPU setups

```bash
docker compose --profile cpu pull
docker compose create && docker compose --profile cpu up
```

## 👓 Recommended reading

n8n is full of useful content for getting started quickly with its AI concepts and nodes. If you run into an issue, head to the [n8n community forum](https://community.n8n.io/) for support.

## 🎥 Video walkthrough

## 🛍️ More AI templates

For more AI workflow ideas, visit the official n8n AI template gallery. From each workflow, select the **Use workflow** button to automatically import the workflow into your local n8n instance.

- Learn AI key concepts
- Local AI templates

## Tips & tricks

### Accessing local files

The self-hosted AI starter kit will create a shared folder (by default, located in the same directory) which is mounted to the n8n container and allows n8n to access files on disk. Within the n8n container, this folder is located at `/data/shared`; this is the path you'll need to use in nodes that interact with the local filesystem.

Nodes that interact with the local filesystem include **Read/Write Files from Disk**, **Local File Trigger**, and **Execute Command**.
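For example, assuming the default `shared` folder next to the Compose file and a container named `n8n`, a file dropped there on the host is visible to those nodes at the container path:

```bash
echo "hello" > ./shared/example.txt                 # on the host
docker exec -it n8n cat /data/shared/example.txt    # same file, seen from inside n8n
```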

## 📜 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
