
Conversation

@tisnik tisnik commented Sep 3, 2025

Description

LCORE-580: Updated documentation that was outdated

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement

Related Tickets & Documents

  • Related Issue #LCORE-580

Summary by CodeRabbit

  • Documentation
    • Updated dependency version references to llama-stack v0.2.18 across README, Getting Started, Deployment Guide, and example project files.
    • Adjusted installation and example output snippets to reflect the new version.
    • No functional or API changes; edits are documentation-only.

coderabbitai bot commented Sep 3, 2025

Walkthrough

Updated llama-stack version references to 0.2.18 across documentation and example project metadata (README.md, docs/*.md, and examples/pyproject.llamastack.toml). No code, exported symbols, or runtime logic were modified.

Changes

Cohort / File(s) Summary
Docs: Version bump references
README.md, docs/deployment_guide.md, docs/getting_started.md
Updated llama-stack version strings to 0.2.18 in README and documentation examples (installation snippet, JSON output blocks, and sample pyproject sections). No behavioral changes.
Examples: Project metadata
examples/pyproject.llamastack.toml
Bumped dependency llama-stack from 0.2.17 to 0.2.18 under [project].dependencies. No code changes.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes


Poem

A rabbit hops through lines so neat,
changing carrots one small feat.
0.2.18 takes its place,
no code disturbed, just tidy space.
I stamp my paw — docs updated, sweet! 🥕🐇


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
README.md (1)

110-112: Fix user-facing typos and minor grammar.

These are visible in docs; please correct for clarity.

-1. check Llama stack settings in [run.yaml](run.yaml), make sure we can access the provider and the model, the server shoud listen to port 8321.
+1. check Llama stack settings in [run.yaml](run.yaml), make sure we can access the provider and the model; the server should listen on port 8321.
- The "provider_type" is used in the llama stack configuration file when refering to the provider.
+ The "provider_type" is used in the llama stack configuration file when referring to the provider.
-The Llama Stack can be run as a standalone server and accessed via its the REST
+The Llama Stack can be run as a standalone server and accessed via its REST
-   This step is not neccessary.
+   This step is not necessary.
-Development images are build from main branch every time a new pull request is merged.
+Development images are built from the main branch every time a new pull request is merged.
-For macosx users:
+For macOS users:
-     + trl==0.20.0
+     + trl==0.20.0

Also applies to: 129-131, 164-169, 433-433, 590-595, 664-666, 141-141

docs/deployment_guide.md (2)

143-147: Tidy up broken/misleading commands and typos.

These will trip users following the guide.

-1. Copy the project file named `pyproject.llamastack.toml` into the new directory, renaming it to `pyproject.toml':
+1. Copy the project file named `pyproject.llamastack.toml` into the new directory, renaming it to `pyproject.toml`:
-cp examples/lightspeed-stack-lls-external.yaml lightspeed-stack.yaml`
+cp examples/lightspeed-stack-lls-external.yaml lightspeed-stack.yaml
-Llama Stack can be used as a library that is already part of OLS image. It means that no other processed needs to be started,
+Llama Stack can be used as a library that is already part of the OLS image. It means that no other processes need to be started,
-Development images are build from main branch every time a new pull request is merged.
+Development images are built from the main branch every time a new pull request is merged.

Also applies to: 403-404, 1124-1126, 599-606


1-1: Update all version pins to 0.2.18

  • examples/pyproject.llamastack.toml:7 – bump llama-stack==0.2.17 → 0.2.18
  • docs/getting_started.md:27 – bump llama-stack==0.2.17 → 0.2.18
  • Verify whether the "version": "0.2.0" entries in docs/deployment_guide.md:1326 and openapi.json:16 should remain as the API schema version or be updated to 0.2.18.
🧹 Nitpick comments (3)
docs/deployment_guide.md (3)

679-697: Dependency pin updated to llama-stack==0.2.18 — OK.

Consider centralizing this version (e.g., a single source of truth via docs variables) to avoid future drift across snippets.
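One way to implement that single source of truth is a small shell helper that rewrites every documented pin from one variable. This is only a sketch: the helper name, file list, and GNU `sed -i` usage are assumptions, not existing repo tooling.

```shell
#!/usr/bin/env bash
# Sketch: rewrite every documented llama-stack pin from one variable,
# so README, docs, and examples cannot drift apart. Requires GNU sed.
set -euo pipefail

LLAMA_STACK_VERSION="0.2.18"

# Rewrite any llama-stack==X.Y.Z pin in a file to $LLAMA_STACK_VERSION.
bump_pin() {
  sed -i -E \
    "s/llama-stack==[0-9]+\.[0-9]+\.[0-9]+/llama-stack==${LLAMA_STACK_VERSION}/g" \
    "$1"
}

# Demo on a scratch file rather than the real docs:
tmp="$(mktemp)"
printf 'dependencies = ["llama-stack==0.2.17"]\n' > "$tmp"
bump_pin "$tmp"
cat "$tmp"    # the pin is now rewritten to 0.2.18
rm -f "$tmp"
```

Running such a helper over README.md, docs/*.md, and examples/pyproject.llamastack.toml before a release would have caught the stale 0.2.17 pins this review flagged.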


1268-1269: Container run command — env var expansion portability.

${OPENAI_API_KEY} relies on the caller’s shell. Consider adding a brief note “ensure OPENAI_API_KEY is exported in your shell” above the command for clarity.
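The suggested note could be sketched as the following guard. The expansion point is from the review; the `:?` parameter check is a standard POSIX shell idiom, not something the guide currently contains.

```shell
#!/usr/bin/env bash
# ${OPENAI_API_KEY} in the container run command is expanded by the
# calling shell, not by the container runtime, so it must be exported
# there first. The value below is a placeholder for demonstration.
export OPENAI_API_KEY="sk-placeholder"

# Fail fast with a clear message if the variable is unset or empty,
# instead of silently passing an empty key into the container:
: "${OPENAI_API_KEY:?OPENAI_API_KEY must be exported in your shell}"

echo "OPENAI_API_KEY is set; the container run command can expand it."
```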


1440-1442: Minor wording nit.

“endlines” → “newlines”.

-> As is shown on the previous example, the output might contain endlines, Markdown marks etc.
+> As shown in the previous example, the output might contain newlines, Markdown markup, etc.
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 2212d7a and 1b2995b.

📒 Files selected for processing (2)
  • README.md (1 hunks)
  • docs/deployment_guide.md (3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: build-pr
  • GitHub Check: e2e_tests
🔇 Additional comments (2)
docs/deployment_guide.md (2)

393-395: Version JSON updated to 0.2.18 — OK.


1074-1076: Second version JSON updated to 0.2.18 — OK.

```diff
 authors = []
 dependencies = [
-    "llama-stack==0.2.14",
+    "llama-stack==0.2.18",
```

💡 Verification agent

🧩 Analysis chain

Version bump to llama-stack 0.2.18 looks correct.

Matches the PR goal and is consistent with other updated snippets.

Run to ensure no stale references remain:


🏁 Script executed:

```bash
#!/bin/bash
# Find any leftover 0.2.14 references or mismatched llama-stack pins
rg -nP '\b0\.2\.14\b|llama-stack==[0-9]+\.[0-9]+\.[0-9]+' -g '!**/dist/**' -g '!**/.venv/**' -S

# Optional: list all version JSON examples to confirm they read 0.2.18
rg -nP '"version"\s*:\s*"0\.2\.\d+"' docs README.md -S
```

Length of output: 629


Fix stale llama-stack version references

  • examples/pyproject.llamastack.toml: update llama-stack==0.2.17 (line 7) to 0.2.18.
  • docs/getting_started.md: update llama-stack==0.2.17 (line 27) to 0.2.18.
🤖 Prompt for AI Agents
In README.md around line 247, update the stale llama-stack reference by ensuring
consistency across the repo: change any occurrences of "llama-stack==0.2.17" to
"llama-stack==0.2.18" — specifically edit examples/pyproject.llamastack.toml
(line 7) and docs/getting_started.md (line 27) to replace 0.2.17 with 0.2.18 so
all files match the README.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
docs/getting_started.md (1)

26-34: Consider reducing future doc churn for this pin.

If compatible, either: (a) use a lower-bound (>=0.2.18) in docs, or (b) centralize the version string (e.g., a single docs include or Makefile var) to avoid drift across files.
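Option (a) might look like this in the example project file (a sketch; whether everything documented actually works across the 0.2.x range is an assumption that would need verifying):

```toml
[project]
dependencies = [
    # Bounded range instead of an exact pin, so docs survive patch releases:
    "llama-stack>=0.2.18,<0.3",
]
```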

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 1b2995b and 8bc6e47.

📒 Files selected for processing (4)
  • README.md (1 hunks)
  • docs/deployment_guide.md (3 hunks)
  • docs/getting_started.md (1 hunks)
  • examples/pyproject.llamastack.toml (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • examples/pyproject.llamastack.toml
🚧 Files skipped from review as they are similar to previous changes (2)
  • docs/deployment_guide.md
  • README.md
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: build-pr
  • GitHub Check: e2e_tests

```diff
 uv add \
-"llama-stack==0.2.17" \
+"llama-stack==0.2.18" \
```

💡 Verification agent

🧩 Analysis chain

Version bump to llama-stack==0.2.18: LGTM; verify repo-wide consistency.

Looks good. Please also scan for any lingering 0.2.14–0.2.17 references in docs/examples and confirm 0.2.18 is on PyPI.

Run:


🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail

echo "Current PyPI version:"
curl -s https://pypi.org/pypi/llama-stack/json | jq -r '.info.version'

echo
echo "Stale version references (expect none):"
rg -nP --glob '!**/dist/**' --glob '!**/build/**' 'llama-stack==0\.2\.(1[0-7]|0?\d)' || true

echo
echo "All pinned occurrences (should mostly be 0.2.18):"
rg -n 'llama-stack=='
```

Length of output: 1003


Bump llama-stack dependency to 0.2.20 across docs and examples.

PyPI’s latest llama-stack release is 0.2.20—update every llama-stack==0.2.18 occurrence (README.md:247, pyproject.toml:31, examples/pyproject.llamastack.toml:7, docs/getting_started.md:27, docs/deployment_guide.md:679) to ==0.2.20 and verify no pins to 0.2.14–0.2.17 remain.

🤖 Prompt for AI Agents
In docs/getting_started.md around line 27, the llama-stack dependency is pinned
to "llama-stack==0.2.18"; update this occurrence to "llama-stack==0.2.20" and
save the file. After making this change, search the repo for any remaining pins
to 0.2.14–0.2.18 and replace them with 0.2.20 (specifically check README.md line
~247, pyproject.toml line ~31, examples/pyproject.llamastack.toml line ~7, and
docs/deployment_guide.md line ~679) to ensure all references are consistent.

@tisnik tisnik merged commit f5a90a0 into lightspeed-core:main Sep 3, 2025
19 checks passed
@coderabbitai coderabbitai bot mentioned this pull request Sep 10, 2025
