LCORE-330: complete documents about OpenAI provider #465
Conversation
Walkthrough

README.md was reorganized to add a top-level lightspeed-stack structure, a new "Run LCS locally" guide, and expanded Configuration subsections (LLM Compatibility, Set LLM provider and model). Changes are documentation-only; no code or public API edits.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Actionable comments posted: 1
🧹 Nitpick comments (3)
README.md (3)
106-116: Fix typos, style, and path inconsistency in “Run LCS locally”.
- Typo: “shoud” → “should”.
- Style: “hands on” → “hands-on”; consistently use “Llama Stack”.
- Numbering: start from 1.
- Path: this section links to run.yaml at repo root, while later sections reference examples/run.yaml. Pick one (suggest: examples/run.yaml) for consistency.
Apply:
-# Run LCS locally
-
-To quickly get hands on LCS, we can run it using the default configurations provided in this repository:
-0. install dependencies using [uv](https://docs.astral.sh/uv/getting-started/installation/) `uv sync --group dev --group llslibdev`
-1. check Llama stack settings in [run.yaml](run.yaml), make sure we can access the provider and the model, the server shoud listen to port 8321.
-2. export the LLM token env var that Llama stack requires. for OpenAI, we set the env var by `export OPENAI_API_KEY=sk-xxxxx`
-3. start Llama stack server `uv run llama stack run run.yaml`
-4. check the LCS settings in [lightspeed-stack.yaml](lightspeed-stack.yaml). `llama_stack.url` should be `url: http://localhost:8321`
-5. start LCS server `make run`
-6. access LCS web UI at [http://localhost:8080/](http://localhost:8080/)
+# Run LCS locally
+
+To quickly get hands-on with LCS, use the default configurations in this repository:
+1. Install dependencies using [uv](https://docs.astral.sh/uv/getting-started/installation/): `uv sync --group dev --group llslibdev`
+2. Check Llama Stack settings in [run.yaml](examples/run.yaml); ensure the provider and model are accessible and the server listens on port 8321.
+3. Export the LLM provider token environment variable required by Llama Stack (for OpenAI: `export OPENAI_API_KEY=sk-xxxxx`).
+4. Start the Llama Stack server: `uv run llama stack run examples/run.yaml`
+5. Verify LCS settings in [lightspeed-stack.yaml](lightspeed-stack.yaml); set `llama_stack.url: http://localhost:8321`.
+6. Start the LCS server: `make run`
+7. Access the LCS web UI at http://localhost:8080/
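As a convenience for readers, the whole flow could also be shown as a copy-pasteable sketch; paths, port numbers, and the placeholder key are assumptions carried over from the steps above, not verified defaults:

```bash
# Sketch of the suggested local setup; adjust paths if your run.yaml lives elsewhere
uv sync --group dev --group llslibdev        # install dependencies

export OPENAI_API_KEY=sk-xxxxx               # placeholder key; use a real one, never commit it
uv run llama stack run examples/run.yaml     # Llama Stack server, expected to listen on port 8321

# in a separate terminal:
make run                                     # LCS connects to http://localhost:8321 and serves http://localhost:8080/
```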
136-151: Unify run.yaml path reference. Here you link to examples/run.yaml; earlier "Run LCS locally" referenced run.yaml at the repo root. Keep them consistent (suggest: examples/run.yaml). If both files exist, clarify their purposes.
154-160: Optional: include a GPT‑5 model example. Add a second snippet showing a gpt‑5 model entry to help users pick the latest default.
 models:
   - model_id: gpt-4-turbo
     provider_id: openai
     model_type: llm
     provider_model_id: gpt-4-turbo
+  - model_id: gpt-5
+    provider_id: openai
+    model_type: llm
+    provider_model_id: gpt-5
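For context, the resulting `models` section would then read roughly as follows; this is only a sketch based on the keys already used in the README snippet:

```yaml
# Sketch of the combined models section; key names mirror the existing README example
models:
  - model_id: gpt-4-turbo          # name exposed by LCS
    provider_id: openai
    model_type: llm
    provider_model_id: gpt-4-turbo # name the provider expects
  - model_id: gpt-5
    provider_id: openai
    model_type: llm
    provider_model_id: gpt-5
```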
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
README.md (2 hunks)
🧰 Additional context used
🪛 LanguageTool
README.md
[grammar] ~110-~110: Ensure spelling is correct
Context: ... the provider and the model, the server shoud listen to port 8321. 2. export the LLM ...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[grammar] ~110-~110: There might be a mistake here.
Context: ...l, the server shoud listen to port 8321. 2. export the LLM token env var that Llama ...
(QB_NEW_EN)
[grammar] ~113-~113: There might be a mistake here.
Context: ...tack.yaml). llama_stack.url should be url: http://localhost:8321 5. start LCS server make run 6. access LC...
(QB_NEW_EN)
[grammar] ~114-~114: There might be a mistake here.
Context: ...p://localhost:83215. start LCS servermake run` 6. access LCS web UI at [http://localhost:8...
(QB_NEW_EN)
[grammar] ~124-~124: There might be a mistake here.
Context: ... | | -------- | ---------------------------...
(QB_NEW_EN)
[grammar] ~125-~125: There might be a mistake here.
Context: ...-------------------------------------- | | OpenAI | gpt-5, gpt-4o, gpt4-turbo, ...
(QB_NEW_EN)
[grammar] ~126-~126: There might be a mistake here.
Context: ...2 | | OpenAI | gpt-3.5-turbo, gpt-4 ...
(QB_NEW_EN)
🪛 markdownlint-cli2 (0.17.2)
README.md
131-131: Bare URL used
(MD034, no-bare-urls)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build-pr
- GitHub Check: e2e_tests
🔇 Additional comments (1)
README.md (1)
18-67: TOC restructure looks good. Entries align with the new sections and improve discoverability.
## LLM Compatibility

Lightspeed Core Stack (LCS) supports the large language models from the providers listed below.

| Provider | Model                                          | Tool Calling | provider_type  | Example                                                                     |
| -------- | ---------------------------------------------- | ------------ | -------------- | --------------------------------------------------------------------------- |
| OpenAI   | gpt-5, gpt-4o, gpt4-turbo, gpt-4.1, o1, o3, o4 | Yes          | remote::openai | [1](examples/openai-faiss-run.yaml) [2](examples/openai-pgvector-run.yaml)  |
| OpenAI   | gpt-3.5-turbo, gpt-4                           | No           | remote::openai |                                                                             |

The "provider_type" is used in the llama stack configuration file when refering to the provider.

For details of OpenAI model capabilities, please refer to https://platform.openai.com/docs/models/compare
Correct model names and link formatting in “LLM Compatibility”.
- Use canonical model IDs and current naming: “gpt-4-turbo” (not “gpt4-turbo”), and “o4-mini” (not “o4”). OpenAI announced GPT‑5 (Aug 7, 2025) and earlier “o3” and “o4‑mini” models; please reflect those names exactly. (openai.com)
- Convert the bare URL to a Markdown link to satisfy MD034.
Apply:
-| Provider | Model | Tool Calling | provider_type | Example |
-| -------- | ---------------------------------------------- | ------------ | -------------- | -------------------------------------------------------------------------- |
-| OpenAI | gpt-5, gpt-4o, gpt4-turbo, gpt-4.1, o1, o3, o4 | Yes | remote::openai | [1](examples/openai-faiss-run.yaml) [2](examples/openai-pgvector-run.yaml) |
-| OpenAI | gpt-3.5-turbo, gpt-4 | No | remote::openai | |
+| Provider | Model | Tool Calling | provider_type | Example |
+| -------- | ---------------------------------------------------- | ------------ | -------------- | -------------------------------------------------------------------------- |
+| OpenAI | gpt-5, gpt-4o, gpt-4-turbo, gpt-4.1, o1, o3, o4-mini | Yes | remote::openai | [1](examples/openai-faiss-run.yaml) [2](examples/openai-pgvector-run.yaml) |
+| OpenAI | gpt-3.5-turbo, gpt-4 | Yes | remote::openai | |
@@
-For details of OpenAI model capabilities, please refer to https://platform.openai.com/docs/models/compare
+For details of OpenAI model capabilities, please refer to the [OpenAI model comparison](https://platform.openai.com/docs/models/compare).

Note: Tool/function calling has been supported across these series for years; set "Yes" accordingly. Also consider adding a short "Last reviewed:" date here, as model availability shifts quickly with GPT‑5 rollouts and retirements. (help.openai.com)
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
## LLM Compatibility

Lightspeed Core Stack (LCS) supports the large language models from the providers listed below.

| Provider | Model                                                | Tool Calling | provider_type  | Example                                                                     |
| -------- | ---------------------------------------------------- | ------------ | -------------- | --------------------------------------------------------------------------- |
| OpenAI   | gpt-5, gpt-4o, gpt-4-turbo, gpt-4.1, o1, o3, o4-mini | Yes          | remote::openai | [1](examples/openai-faiss-run.yaml) [2](examples/openai-pgvector-run.yaml)  |
| OpenAI   | gpt-3.5-turbo, gpt-4                                 | Yes          | remote::openai |                                                                             |

The "provider_type" is used in the llama stack configuration file when refering to the provider.

For details of OpenAI model capabilities, please refer to the [OpenAI model comparison](https://platform.openai.com/docs/models/compare).
🧰 Tools
🪛 LanguageTool
[grammar] ~124-~124: There might be a mistake here.
Context: ... | | -------- | ---------------------------...
(QB_NEW_EN)
[grammar] ~125-~125: There might be a mistake here.
Context: ...-------------------------------------- | | OpenAI | gpt-5, gpt-4o, gpt4-turbo, ...
(QB_NEW_EN)
[grammar] ~126-~126: There might be a mistake here.
Context: ...2 | | OpenAI | gpt-3.5-turbo, gpt-4 ...
(QB_NEW_EN)
🪛 markdownlint-cli2 (0.17.2)
131-131: Bare URL used
(MD034, no-bare-urls)
🤖 Prompt for AI Agents
In README.md around lines 120 to 133, update the LLM Compatibility table to use
canonical OpenAI model IDs and correct naming (e.g., replace "gpt4-turbo" with
"gpt-4-turbo", change "o4" to "o4-mini" or the exact current model IDs such as
"o4-mini" and "o3" as appropriate), set Tool Calling to "Yes" for the relevant
series, convert the bare OpenAI documentation URL into a proper Markdown link,
and add a short "Last reviewed:" date line to indicate when the table was
validated; ensure the provider_type and example links remain intact and reflect
the new model names.
tisnik
left a comment
the content looks ok, but the TOC will be regenerated into the original format, so it does not make much sense to reformat it in your PR. Do you want to merge it as is?
Signed-off-by: Haoyu Sun <hasun@redhat.com>
Actionable comments posted: 0
♻️ Duplicate comments (1)
README.md (1)
124-128: Correct model names, tool-calling column, and convert bare URL; add a review date. Use canonical model IDs, mark tool calling "Yes" where applicable, and avoid a bare URL. Add a "Last reviewed" note due to frequent model changes.
-| Provider | Model                                          | Tool Calling | provider_type  | Example                                                                     |
-| -------- | ---------------------------------------------- | ------------ | -------------- | --------------------------------------------------------------------------- |
-| OpenAI   | gpt-5, gpt-4o, gpt4-turbo, gpt-4.1, o1, o3, o4 | Yes          | remote::openai | [1](examples/openai-faiss-run.yaml) [2](examples/openai-pgvector-run.yaml)  |
-| OpenAI   | gpt-3.5-turbo, gpt-4                           | No           | remote::openai |                                                                             |
+| Provider | Model                                                | Tool Calling | provider_type  | Example                                                                     |
+| -------- | ---------------------------------------------------- | ------------ | -------------- | --------------------------------------------------------------------------- |
+| OpenAI   | gpt-5, gpt-4o, gpt-4-turbo, gpt-4.1, o1, o3, o4-mini | Yes          | remote::openai | [1](examples/openai-faiss-run.yaml) [2](examples/openai-pgvector-run.yaml)  |
+| OpenAI   | gpt-3.5-turbo, gpt-4                                 | Yes          | remote::openai |                                                                             |
@@
-For details of OpenAI model capabilities, please refer to https://platform.openai.com/docs/models/compare
+For details of OpenAI model capabilities, see the [OpenAI model comparison](https://platform.openai.com/docs/models/compare).
+
+Last reviewed: 2025-08-29.

Also applies to: 131-133
🧹 Nitpick comments (5)
README.md (5)
18-19: Fix TOC indentation to satisfy MD007. Use two-space indents for nested list items in the TOC.
-* [lightspeed-stack](#lightspeed-stack)
-    * [About The Project](#about-the-project)
+* [lightspeed-stack](#lightspeed-stack)
+  * [About The Project](#about-the-project)
@@
-* [Configuration](#configuration)
-    * [LLM Compatibility](#llm-compatibility)
-    * [Set LLM provider and model](#set-llm-provider-and-model)
+* [Configuration](#configuration)
+  * [LLM Compatibility](#llm-compatibility)
+  * [Set LLM provider and model](#set-llm-provider-and-model)
@@
-    * [Utility to generate documentation from source code](#utility-to-generate-documentation-from-source-code)
+  * [Utility to generate documentation from source code](#utility-to-generate-documentation-from-source-code)

Also applies to: 25-26, 70-70
106-116: Polish "Run LCS locally": fix typos, numbering, and path consistency. Correct spelling/casing, use consistent ordered list numbering, and point to the same run.yaml path used later.
-# Run LCS locally
-
-To quickly get hands on LCS, we can run it using the default configurations provided in this repository:
-0. install dependencies using [uv](https://docs.astral.sh/uv/getting-started/installation/) `uv sync --group dev --group llslibdev`
-1. check Llama stack settings in [run.yaml](run.yaml), make sure we can access the provider and the model, the server shoud listen to port 8321.
-2. export the LLM token env var that Llama stack requires. for OpenAI, we set the env var by `export OPENAI_API_KEY=sk-xxxxx`
-3. start Llama stack server `uv run llama stack run run.yaml`
-4. check the LCS settings in [lightspeed-stack.yaml](lightspeed-stack.yaml). `llama_stack.url` should be `url: http://localhost:8321`
-5. start LCS server `make run`
-6. access LCS web UI at [http://localhost:8080/](http://localhost:8080/)
+# Run LCS locally
+
+To quickly try LCS, use the default configurations in this repository:
+1. Install dependencies with [uv](https://docs.astral.sh/uv/getting-started/installation/): `uv sync --group dev --group llslibdev`
+1. Check Llama Stack settings in [run.yaml](examples/run.yaml); ensure the provider and model are reachable and the server should listen on port 8321.
+1. Export the LLM token environment variable required by Llama Stack. For OpenAI: `export OPENAI_API_KEY=sk-xxxxx`
+1. Start Llama Stack: `uv run llama stack run examples/run.yaml`
+1. Check the LCS settings in [lightspeed-stack.yaml](lightspeed-stack.yaml); set `llama_stack.url: http://localhost:8321`
+1. Start the LCS server: `make run`
+1. Access the LCS web UI at http://localhost:8080/

Would you confirm whether the canonical example path should be `examples/run.yaml` everywhere? If not, we can flip both references to the desired location.
129-129: Typo: "refering" → "referring".

-The "provider_type" is used in the llama stack configuration file when refering to the provider.
+The "provider_type" is used in the Llama Stack configuration file when referring to the provider.
138-141: Grammar/style: articles and naming consistency.

-The LLM providers are set in the section `providers.inference`. This example adds a inference provider "openai" to the llama stack. To use environment variables as configuration values, we can use the syntax `${env.ENV_VAR_NAME}`.
+The LLM providers are set in the `providers.inference` section. This example adds an inference provider `openai` to Llama Stack. To use environment variables as configuration values, use the syntax `${env.ENV_VAR_NAME}`.
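For readers following the thread, a provider entry of that shape might look roughly like the sketch below. The structure is inferred from the quoted README snippet and the `api_key`/`url` lines discussed in the next comment; treat it as illustrative, not a verified Llama Stack schema:

```yaml
# Illustrative sketch only; field names follow the README snippet under review
providers:
  inference:
    - provider_id: openai
      provider_type: remote::openai        # matches the provider_type column in the compatibility table
      config:
        api_key: ${env.OPENAI_API_KEY}     # resolved from the environment at startup
        url: ${env.SERVICE_URL}            # optional; only needed for a proxy or self-hosted endpoint
```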
142-150: Add a security note about secrets and clarify base URL usage. Remind readers not to commit API keys; note that a custom base URL is only needed when using a proxy/self-hosted endpoint.
    api_key: ${env.OPENAI_API_KEY}
    url: ${env.SERVICE_URL}
+> [!IMPORTANT]
+> Do not commit API keys to the repository; use environment variables or a secret manager. If your provider uses the default public endpoint, you can omit `url`.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📥 Commits
Reviewing files that changed from the base of the PR and between 6a4d07667e4e52e9f37cd861ac03af0a10b0f184 and afc03cce63966c47d615733237533319ea9f1aba.
📒 Files selected for processing (1)
README.md (3 hunks)
🧰 Additional context used
🪛 markdownlint-cli2 (0.17.2)
README.md
19-19: Unordered list indentation
Expected: 2; Actual: 4
(MD007, ul-indent)
25-25: Unordered list indentation
Expected: 2; Actual: 4
(MD007, ul-indent)
26-26: Unordered list indentation
Expected: 2; Actual: 4
(MD007, ul-indent)
70-70: Unordered list indentation
Expected: 2; Actual: 4
(MD007, ul-indent)
131-131: Bare URL used
(MD034, no-bare-urls)
🪛 LanguageTool
README.md
[grammar] ~110-~110: Ensure spelling is correct
Context: ... the provider and the model, the server shoud listen to port 8321. 2. export the LLM ...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[grammar] ~110-~110: There might be a mistake here.
Context: ...l, the server shoud listen to port 8321. 2. export the LLM token env var that Llama ...
(QB_NEW_EN)
[grammar] ~111-~111: There might be a mistake here.
Context: ...t 8321. 2. export the LLM token env var that Llama stack requires. for OpenAI, we se...
(QB_NEW_EN)
[grammar] ~113-~113: There might be a mistake here.
Context: ...tack.yaml). `llama_stack.url` should be `url: http://localhost:8321` 5. start LCS server `make run` 6. access LC...
(QB_NEW_EN)
[grammar] ~114-~114: There might be a mistake here.
Context: ...p://localhost:8321` 5. start LCS server `make run` 6. access LCS web UI at [http://localhost:8...
(QB_NEW_EN)
[grammar] ~124-~124: There might be a mistake here.
Context: ... | | -------- | ---------------------------...
(QB_NEW_EN)
[grammar] ~125-~125: There might be a mistake here.
Context: ...-------------------------------------- | | OpenAI | gpt-5, gpt-4o, gpt4-turbo, ...
(QB_NEW_EN)
[grammar] ~126-~126: There might be a mistake here.
Context: ...[2](examples/openai-pgvector-run.yaml) | | OpenAI | gpt-3.5-turbo, gpt-4 ...
(QB_NEW_EN)
🔇 Additional comments (1)
README.md (1)
154-159: Models snippet looks good. The example correctly demonstrates `model_id` vs `provider_model_id` and associates the model with the declared provider.
tisnik
left a comment
LGTM
Description
Just add documentation about the OpenAI provider. Other requirements are already met:
Type of change
Related Tickets & Documents
Checklist before requesting a review
Testing
Summary by CodeRabbit