
Conversation

@bsatapat-jpg (Contributor) commented Sep 23, 2025

Description

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement

Related Tickets & Documents

  • Related Issue #
  • Closes #

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Summary by CodeRabbit

  • Documentation
    • Adds a comprehensive BYOK (Bring Your Own Knowledge) guide.
    • Covers overview, how it works, prerequisites, and workflow.
    • Provides step-by-step setup: prepare knowledge sources, create a vector DB, configure embeddings and Llama Stack, and enable RAG tools.
    • Lists supported vector DBs (FAISS, pgvector) with YAML/config examples.
    • Includes sample configs for OpenAI+FAISS and vLLM+pgvector, plus references.

@coderabbitai bot (Contributor) commented Sep 23, 2025

Walkthrough

Adds a new documentation file docs/byok_guide.md describing BYOK (Bring Your Own Knowledge): overview, prerequisites, ingestion/indexing workflow, supported vector DBs (FAISS, pgvector), and YAML/config examples for OpenAI+FAISS and vLLM+pgvector. No runtime or API changes.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Documentation (BYOK Guide): docs/byok_guide.md | New guide covering BYOK concepts, prerequisites, ingestion/indexing workflow, YAML/config samples (run.yaml), examples for OpenAI+FAISS and vLLM+pgvector, vector DB setup, and enabling RAG tools. No code changes. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant U as User
  participant A as App
  participant LS as Llama Stack
  participant RAG as RAG Tools
  participant VDB as Vector DB (FAISS/pgvector)
  participant SRC as Knowledge Sources

  rect rgb(235,245,255)
    Note over SRC,VDB: One-time setup: ingest & index
    A->>SRC: Collect/prepare documents
    A->>RAG: Embed documents
    RAG->>VDB: Upsert vectors + metadata
  end

  rect rgb(245,255,235)
    Note over U,LS: Query time
    U->>A: Ask question
    A->>LS: Submit request with RAG enabled
    LS->>RAG: Request relevant context
    RAG->>VDB: Vector search (kNN)
    VDB-->>RAG: Return top-k vectors
    RAG-->>LS: Provide context snippets
    LS-->>A: Generate answer with citations
    A-->>U: Respond
  end
```
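The query-time path in the diagram reduces to a k-nearest-neighbour search over embedded documents. As an illustration only (toy vectors and a hypothetical helper, not the actual RAG Tools API), the retrieval step can be sketched as:

```python
import numpy as np

# Toy document embeddings; in practice these come from an embedding model
# such as sentence-transformers (values here are made up for illustration).
doc_vectors = np.array([
    [0.9, 0.1, 0.0],   # doc 0
    [0.0, 1.0, 0.1],   # doc 1
    [0.1, 0.0, 1.0],   # doc 2
], dtype=float)

def top_k(query: np.ndarray, vectors: np.ndarray, k: int = 2) -> list[int]:
    """Return indices of the k most similar rows by cosine similarity."""
    q = query / np.linalg.norm(query)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    scores = v @ q                        # cosine similarity per document
    return list(np.argsort(-scores)[:k])  # best-scoring indices first

query = np.array([0.85, 0.15, 0.05])      # embedded user question
print(top_k(query, doc_vectors))          # doc 0 ranks first
```

A production vector DB (FAISS, pgvector) performs the same ranking with approximate or SQL-backed indexes instead of a dense matrix product.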

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

Poem

A rabbit hops across the guide,
I stash the docs and open wide.
Vectors hum and RAGs align,
Knowledge blooms in every line.
Hop—BYOK!—the garden's fine. 🐇✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)

  • Description Check: ✅ Passed. Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check: ✅ Passed. The title “Added BYOK documentation” accurately and concisely summarizes the primary change: new documentation for the BYOK feature.
  • Docstring Coverage: ✅ Passed. No functions found in the changes, so the docstring coverage check was skipped.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e2c4ace and 0028b2c.

📒 Files selected for processing (1)
  • docs/byok_guide.md (1 hunks)
🧰 Additional context used
🪛 markdownlint-cli2 (0.18.1)
docs/byok_guide.md

66-66: Bare URL used

(MD034, no-bare-urls)


94-94: Bare URL used

(MD034, no-bare-urls)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: e2e_tests
🔇 Additional comments (3)
docs/byok_guide.md (3)

185-229: Fix provider list indentation in the run.yaml snippet.

This was flagged earlier but the current snippet still has the sequence items aligned with their parent keys, which is easy to misread and inconsistent with the rest of the guide. The - provider_id entries under each providers subsection should be indented beneath their respective keys.

 providers:
   inference:
-  # Embedding model provider
-  - provider_id: sentence-transformers
-    provider_type: inline::sentence-transformers
-    config: {}
-
-  # LLM provider (example: OpenAI)
-  - provider_id: openai
-    provider_type: remote::openai
-    config:
-      api_key: ${env.OPENAI_API_KEY}
+    # Embedding model provider
+    - provider_id: sentence-transformers
+      provider_type: inline::sentence-transformers
+      config: {}
+
+    # LLM provider (example: OpenAI)
+    - provider_id: openai
+      provider_type: remote::openai
+      config:
+        api_key: ${env.OPENAI_API_KEY}
 
   agents:
-  - provider_id: meta-reference
-    provider_type: inline::meta-reference
-    config:
-      persistence_store:
-        type: sqlite
-        db_path: .llama/distributions/ollama/agents_store.db
-      responses_store:
-        type: sqlite
-        db_path: .llama/distributions/ollama/responses_store.db
+    - provider_id: meta-reference
+      provider_type: inline::meta-reference
+      config:
+        persistence_store:
+          type: sqlite
+          db_path: .llama/distributions/ollama/agents_store.db
+        responses_store:
+          type: sqlite
+          db_path: .llama/distributions/ollama/responses_store.db
 
   safety:
-  - provider_id: llama-guard
-    provider_type: inline::llama-guard
-    config:
-      excluded_categories: []
+    - provider_id: llama-guard
+      provider_type: inline::llama-guard
+      config:
+        excluded_categories: []
 
   # Vector database configuration
   vector_io:
-  - provider_id: your-knowledge-base
-    provider_type: inline::faiss  # or remote::pgvector
-    config:
-      kvstore:
-        type: sqlite
-        db_path: /path/to/vector_db/faiss_store.db
-        namespace: null
+    - provider_id: your-knowledge-base
+      provider_type: inline::faiss  # or remote::pgvector
+      config:
+        kvstore:
+          type: sqlite
+          db_path: /path/to/vector_db/faiss_store.db
+          namespace: null
 
   tool_runtime:
-  - provider_id: rag-runtime
-    provider_type: inline::rag-runtime
-    config: {}
+    - provider_id: rag-runtime
+      provider_type: inline::rag-runtime
+      config: {}
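With the corrected indentation, the sequence entries clearly belong to their subsection keys. A quick sanity check with PyYAML (trimmed to the vector_io section; the path is the placeholder from the snippet above):

```python
import yaml  # PyYAML

snippet = """
providers:
  vector_io:
    - provider_id: your-knowledge-base
      provider_type: inline::faiss
      config:
        kvstore:
          type: sqlite
          db_path: /path/to/vector_db/faiss_store.db
          namespace: null
"""

cfg = yaml.safe_load(snippet)
vector_io = cfg["providers"]["vector_io"]
# vector_io parses as a list of provider mappings, as Llama Stack expects
print(vector_io[0]["provider_id"])                # your-knowledge-base
print(vector_io[0]["config"]["kvstore"]["type"])  # sqlite
```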

330-370: Example 1 providers block still mis-indented.

The same indentation issue persists here; nest the list items under each providers subsection so the example matches the corrected run.yaml snippet and copies cleanly.

 providers:
   inference:
-  - provider_id: sentence-transformers
-    provider_type: inline::sentence-transformers
-    config: {}
-  - provider_id: openai
-    provider_type: remote::openai
-    config:
-      api_key: ${env.OPENAI_API_KEY}
+    - provider_id: sentence-transformers
+      provider_type: inline::sentence-transformers
+      config: {}
+    - provider_id: openai
+      provider_type: remote::openai
+      config:
+        api_key: ${env.OPENAI_API_KEY}
 
   agents:
-  - provider_id: meta-reference
-    provider_type: inline::meta-reference
-    config:
-      persistence_store:
-        type: sqlite
-        db_path: .llama/distributions/ollama/agents_store.db
-      responses_store:
-        type: sqlite
-        db_path: .llama/distributions/ollama/responses_store.db
+    - provider_id: meta-reference
+      provider_type: inline::meta-reference
+      config:
+        persistence_store:
+          type: sqlite
+          db_path: .llama/distributions/ollama/agents_store.db
+        responses_store:
+          type: sqlite
+          db_path: .llama/distributions/ollama/responses_store.db
 
   safety:
-  - provider_id: llama-guard
-    provider_type: inline::llama-guard
-    config:
-      excluded_categories: []
+    - provider_id: llama-guard
+      provider_type: inline::llama-guard
+      config:
+        excluded_categories: []
 
   vector_io:
-  - provider_id: company-docs
-    provider_type: inline::faiss
-    config:
-      kvstore:
-        type: sqlite
-        db_path: /home/user/vector_dbs/company_docs/faiss_store.db
-        namespace: null
+    - provider_id: company-docs
+      provider_type: inline::faiss
+      config:
+        kvstore:
+          type: sqlite
+          db_path: /home/user/vector_dbs/company_docs/faiss_store.db
+          namespace: null
 
   tool_runtime:
-  - provider_id: rag-runtime
-    provider_type: inline::rag-runtime
-    config: {}
+    - provider_id: rag-runtime
+      provider_type: inline::rag-runtime
+      config: {}

409-455: Example 2 providers block has the same YAML indentation issue.

The list entries should be indented under their parent keys for consistency; please fix the indentation (and feel free to keep the api_token note, but the structure should match the other examples).

 providers:
   inference:
-  - provider_id: sentence-transformers
-    provider_type: inline::sentence-transformers
-    config: {}
-  - provider_id: vllm
-    provider_type: remote::vllm
-    config:
-      url: http://localhost:8000/v1/
-      api_token: your-token-here
+    - provider_id: sentence-transformers
+      provider_type: inline::sentence-transformers
+      config: {}
+    - provider_id: vllm
+      provider_type: remote::vllm
+      config:
+        url: http://localhost:8000/v1/
+        api_token: your-token-here
 
   agents:
-  - provider_id: meta-reference
-    provider_type: inline::meta-reference
-    config:
-      persistence_store:
-        type: sqlite
-        db_path: .llama/distributions/ollama/agents_store.db
-      responses_store:
-        type: sqlite
-        db_path: .llama/distributions/ollama/responses_store.db
+    - provider_id: meta-reference
+      provider_type: inline::meta-reference
+      config:
+        persistence_store:
+          type: sqlite
+          db_path: .llama/distributions/ollama/agents_store.db
+        responses_store:
+          type: sqlite
+          db_path: .llama/distributions/ollama/responses_store.db
 
   safety:
-  - provider_id: llama-guard
-    provider_type: inline::llama-guard
-    config:
-      excluded_categories: []
+    - provider_id: llama-guard
+      provider_type: inline::llama-guard
+      config:
+        excluded_categories: []
 
   vector_io:
-  - provider_id: enterprise-knowledge
-    provider_type: remote::pgvector
-    config:
-      host: postgres.company.com
-      port: 5432
-      db: enterprise_kb
-      user: rag_user
-      password: ${env.POSTGRES_PASSWORD}
-      kvstore:
-        type: sqlite
-        db_path: .llama/distributions/pgvector/registry.db
+    - provider_id: enterprise-knowledge
+      provider_type: remote::pgvector
+      config:
+        host: postgres.company.com
+        port: 5432
+        db: enterprise_kb
+        user: rag_user
+        password: ${env.POSTGRES_PASSWORD}
+        kvstore:
+          type: sqlite
+          db_path: .llama/distributions/pgvector/registry.db
 
   tool_runtime:
-  - provider_id: rag-runtime
-    provider_type: inline::rag-runtime
-    config: {}
+    - provider_id: rag-runtime
+      provider_type: inline::rag-runtime
+      config: {}


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 3

🧹 Nitpick comments (6)
docs/byok_guide.md (6)

66-68: Convert bare URL to a Markdown link (MD034).

Improves lint compliance and readability.

-  - Repository: https://github.com/lightspeed-core/rag-content
+  - Repository: <https://github.com/lightspeed-core/rag-content>

Optional (clearer):

-  - Repository: https://github.com/lightspeed-core/rag-content
+  - Repository: [lightspeed-core/rag-content](https://github.com/lightspeed-core/rag-content)

92-93: Grammar + bare URL fix (MD034).

-Please refer https://github.com/lightspeed-core/rag-content to create your vector database
+Please refer to the [rag-content repository](https://github.com/lightspeed-core/rag-content) to create your vector database.

96-103: Format the “Supported formats” list and standardize naming.

Use bullet list and consistent “Llama Stack” spelling.

-- Supported formats: 
-  You can generate the vector database either using:
-    Llama-Index Faiss Vector Store
-    Llama-Index Postgres (PGVector) Vector Store
-    Llama-Stack Faiss Vector-IO
-    Llama-Stack SQLite-vec Vector-IO
+- Supported formats:
+  You can generate the vector database using:
+  - LlamaIndex FAISS Vector Store
+  - LlamaIndex Postgres (pgvector) Vector Store
+  - Llama Stack FAISS Vector I/O
+  - Llama Stack SQLite-vec Vector I/O

70-73: Clarify Llama Stack requirement.

Current phrasing is ambiguous.

-- **Llama Stack**: Compatible vector database backend
+- **Llama Stack**: Installed and configured with the vector_io API enabled

106-113: Call out PDM prerequisite or provide a plain Python alternative.

Avoid friction for readers who don’t use PDM.

-Download and configure your embedding model:
-Use the embedding generation step mentioned in the rag-content repo.
-For example:
+Download and configure your embedding model:
+Use the embedding generation step mentioned in the rag-content repo.
+Note: The command below uses PDM; ensure PDM is installed. Alternatively, run the script with plain Python.
 For example:
 ```bash
 mkdir ./embeddings_model
-pdm run python ./scripts/download_embeddings_model.py -l ./embeddings_model/ -r sentence-transformers/all-mpnet-base-v2 
+# With PDM:
+pdm run python ./scripts/download_embeddings_model.py -l ./embeddings_model/ -r sentence-transformers/all-mpnet-base-v2
+# Or with Python directly:
+python ./scripts/download_embeddings_model.py -l ./embeddings_model/ -r sentence-transformers/all-mpnet-base-v2
```

375-378: Avoid plaintext tokens in examples; prefer env var placeholders.

You already use env vars elsewhere. Apply:

-      api_token: your-token-here
+      api_token: ${env.VLLM_API_TOKEN}
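The ${env.VLLM_API_TOKEN} placeholder is resolved from the process environment when the stack loads its config. The substitution behaves roughly like this sketch (expand_env_placeholders is a hypothetical helper shown only to illustrate the pattern, not Llama Stack's actual loader):

```python
import os
import re

def expand_env_placeholders(text: str) -> str:
    """Replace ${env.NAME} markers with values from the environment."""
    def repl(match: re.Match) -> str:
        name = match.group(1)
        value = os.environ.get(name)
        if value is None:
            raise KeyError(f"environment variable {name} is not set")
        return value
    return re.sub(r"\$\{env\.([A-Za-z_][A-Za-z0-9_]*)\}", repl, text)

os.environ["VLLM_API_TOKEN"] = "s3cret"  # normally exported in your shell
print(expand_env_placeholders("api_token: ${env.VLLM_API_TOKEN}"))
# api_token: s3cret
```

The token then never appears in the checked-in YAML, which is the point of the review suggestion.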
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3dba25f and 7d3f675.

📒 Files selected for processing (1)
  • docs/byok_guide.md (1 hunks)
🧰 Additional context used
🪛 markdownlint-cli2 (0.18.1)
docs/byok_guide.md

66-66: Bare URL used

(MD034, no-bare-urls)


92-92: Bare URL used

(MD034, no-bare-urls)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: e2e_tests
🔇 Additional comments (3)
docs/byok_guide.md (3)

1-6: Clear overview; good scoping and audience fit.

Intro succinctly frames BYOK, contexts, and outcomes. No issues.


45-56: Confirm Mermaid rendering support in your docs toolchain.

If Mermaid isn’t enabled, the diagram will render as a code block.


434-436: Link valid — docs/rag_guide.md exists
Verified: file present at docs/rag_guide.md; link is not broken.

@are-ces (Contributor) left a comment

Hey, I have added some comments; open to discussion!

I would remove pgvector from the supported vector DBs, as it is not yet supported in LCS. See here.

Good work :)

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
docs/byok_guide.md (1)

66-94: Convert bare URLs to Markdown links.
Markdownlint is flagging the raw URLs; turning them into links also reads better.

Consider:

-- **rag-content tool**: For creating compatible vector databases
-  - Repository: https://github.com/lightspeed-core/rag-content
+- **rag-content tool**: For creating compatible vector databases
+  - Repository: [lightspeed-core/rag-content](https://github.com/lightspeed-core/rag-content)
 ...
-Please refer https://github.com/lightspeed-core/rag-content to create your vector database
+Refer to the [rag-content repository](https://github.com/lightspeed-core/rag-content) to create your vector database.
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7d3f675 and e2c4ace.

📒 Files selected for processing (1)
  • docs/byok_guide.md (1 hunks)
🧰 Additional context used
🪛 markdownlint-cli2 (0.18.1)
docs/byok_guide.md

66-66: Bare URL used

(MD034, no-bare-urls)


94-94: Bare URL used

(MD034, no-bare-urls)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: e2e_tests

@are-ces (Contributor) commented Sep 25, 2025

Thank you @bsatapat-jpg for addressing my comments. Nit: it would be better to state clearly that vector_db_id must match the id provided during creation using the rag-tool; I see you wrote it as a comment in the YAML file, but I feel it could go unnoticed.

Otherwise LGTM.

@bsatapat-jpg (Contributor, Author) commented

Thanks @are-ces. Addressed the review comment by adding it as an Important note under Step 4.

@are-ces (Contributor) left a comment

LGTM

@tisnik (Contributor) left a comment

LGTM

@tisnik tisnik merged commit e5bddff into lightspeed-core:main Sep 25, 2025
18 of 19 checks passed