
blog-examples

Samples and demonstrations of YugabyteDB

Retrieval Augmented Generation

| Location | Content |
|----------|---------|
| ai_rag | Docker / podman compose manifest for n8n and a two-node YugabyteDB universe, plus two sample n8n workflows |

Retrieval Augmented Generation (RAG) is a process through which unstructured data, such as emails, transcripts, order forms, and manuals, is converted to vectors and made available to an AI agent. YugabyteDB supports vector storage and semantic search at massive scale, allowing an AI to find relevant information based on its meaning rather than a lexical (text) match.
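Semantic search works by comparing embedding vectors rather than raw text. A minimal sketch in Python, using made-up toy vectors (real models such as mxbai-embed-large produce vectors with over a thousand dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: close to 1.0 means similar meaning, close to 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" -- illustrative values only, not real model output.
query = [0.9, 0.1, 0.0]   # e.g. "refund my order"
doc_a = [0.8, 0.2, 0.1]   # e.g. "how to return a purchase"
doc_b = [0.0, 0.1, 0.9]   # e.g. "server maintenance schedule"

# The document whose vector points in the most similar direction wins,
# even though "refund" and "return" share no words.
print(cosine_similarity(query, doc_a) > cosine_similarity(query, doc_b))  # True
```

This is the same comparison a vector database performs, just at scale and with an index instead of a linear scan.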

This content provisions YugabyteDB and n8n containers in Docker or podman, and provides two example workflows that implement RAG with n8n.

Prerequisites

  1. Docker or podman
  2. Ollama

For best performance, Ollama should run locally rather than in a container. Once it is installed, download the embedding model with ollama pull mxbai-embed-large:latest.
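Under the hood, n8n's Ollama nodes talk to Ollama's local REST API. A hedged sketch of the same embedding request in Python, assuming Ollama is listening on its default port 11434 (the helper below is illustrative, not part of this repository):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # Ollama's default endpoint

def build_embedding_request(text, model="mxbai-embed-large:latest"):
    """Build the JSON request Ollama expects for an embedding call."""
    payload = {"model": model, "prompt": text}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_embedding_request("YugabyteDB supports vectors")
# To actually send it (requires a running Ollama instance):
#   with urllib.request.urlopen(req) as resp:
#       embedding = json.load(resp)["embedding"]
```

From inside the compose network, n8n reaches the same endpoint via http://host.docker.internal:11434, which is why that URL appears in the credential setup below.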

Installation

  1. Deploy the required containers with podman-compose up --detach
  2. Visit http://localhost:5678 and sign in to n8n
  3. Select the drop-down next to Create Workflow and click Create Credential
  4. Select Ollama as the credential type and http://host.docker.internal:11434 as the URL
  5. Save the credential
  6. Create another credential for Postgres (to connect to YugabyteDB)
    • Host should be rag-yugabytedb-1
    • User, database and password should be yugabyte
    • Port should be 5433 (for the first node)
  7. Create a final credential for Anthropic, or an alternative AI provider if needed
  8. Create a new workflow
  9. Click ... then Import from File...
  10. Import one of the examples (rag_demo.json or rag_intro.json) from the repository
  11. Double-click any nodes with red crosses to set the credential (simply returning to the flow should do this)
  12. Click Execute now to populate the YugabyteDB database with vectors / embeddings based on the source material
  13. Use the chat to interact with an AI agent which can leverage the vector database for additional context
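Step 12 stores embeddings in YugabyteDB using the pgvector extension, where a vector is written as a bracketed literal such as '[0.1,0.2,0.3]'. A rough sketch of the kind of insert involved (the table and column names here are hypothetical, not the schema the workflows actually create):

```python
def to_vector_literal(embedding):
    """Format a Python list of floats as a pgvector literal, e.g. '[0.1,0.2,0.3]'."""
    return "[" + ",".join(str(x) for x in embedding) + "]"

# Hypothetical table; the real workflows manage their own schema.
insert_sql = "INSERT INTO documents (content, embedding) VALUES (%s, %s::vector)"
params = ("sample chunk of a manual", to_vector_literal([0.1, 0.2, 0.3]))

# With a driver such as psycopg2, connecting from inside the compose network
# using the credential details above (host rag-yugabytedb-1, port 5433,
# user/password/database yugabyte):
#   conn = psycopg2.connect(host="rag-yugabytedb-1", port=5433,
#                           user="yugabyte", password="yugabyte", dbname="yugabyte")
#   conn.cursor().execute(insert_sql, params)
```

The chat agent in step 13 then queries the same table with a vector distance operator to pull back the most semantically relevant chunks as context.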

If required, the embedding model and LLM nodes can be replaced with Ollama models or a commercial AI service such as Google Gemini or OpenAI.
