LCORE-870: Add AWS Bedrock inference provider support#1449
Conversation
- Updated documentation
- Added AWS Bedrock e2e test
- Patched other providers e2e test (missing mcp)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Walkthrough: Adds AWS Bedrock as a supported inference provider: updates the CI matrix and secrets, exposes Bedrock credential variables in Docker Compose, documents Bedrock in the README/docs, and introduces example and e2e runtime configs for Bedrock-based setups, plus a small test config tweak.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks: ✅ 3 passed
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@examples/bedrock-run.yaml`:
- Around line 54-57: The YAML currently contains a placeholder secret
openai_api_key: '********' which is not runnable; update the provider config
(provider_id/provider_type config block) to load the Braintrust key from an
environment variable instead of a literal placeholder (e.g., reference a
BRAINTRUST/OpenAI API env var in the config for openai_api_key) so the
scoring-provider can initialize at runtime; ensure the env var name is
documented or matches your runtime secrets naming convention.
- Around line 137-157: The default_embedding_model (provider_id:
sentence-transformers, model_id: nomic-ai/nomic-embed-text-v1.5) referenced
under vector_stores must be registered in the models list so embeddings can
resolve; add a models entry (e.g., model_id: nomic-embed-text-v1.5 or matching
identifier) with provider_id: sentence-transformers, model_type: embedding (or
llm if your schema requires model_type) and provider_model_id set to
nomic-ai/nomic-embed-text-v1.5 so that default_embedding_model points to a
registered resource (update the existing models array near the top that contains
custom-bedrock-model).
In `@README.md`:
- Line 216: Update the compatibility table entry that currently lists the
Bedrock model as deepseek.v3-v1 to use the exact tested model identifier
deepseek.v3-v1:0 so it matches the runnable configs (see
examples/bedrock-run.yaml and tests/e2e/configs/run-bedrock.yaml which use
deepseek.v3-v1:0); change the table cell value to deepseek.v3-v1:0 to avoid
copy/paste misconfiguration.
In `@tests/e2e/configs/run-bedrock.yaml`:
- Around line 54-57: The E2E config contains a hardcoded placeholder credential
for openai_api_key; replace the literal '********' under the provider config
(provider_id/provider_type/config/openai_api_key) with an environment-backed
reference so tests remain runnable in CI (e.g., read from OPENAI_API_KEY or the
project's secret/template mechanism), and update any test harness or docs to
require that environment variable.
- Around line 137-157: The default_embedding_model referenced by vector_stores
(provider_id: sentence-transformers, model_id: nomic-ai/nomic-embed-text-v1.5)
is not registered in registered_resources.models; add a matching model
registration entry to registered_resources.models with the same provider_id and
model_id (and set model_type to an embedding type if your schema uses one) so
the RAG toolgroup (toolgroup_id: builtin::rag) can resolve embeddings at
runtime.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 30e53654-7079-452e-b5b6-a413864b57ce
📒 Files selected for processing (8)
- .github/workflows/e2e_tests_providers.yaml
- README.md
- docker-compose-library.yaml
- docker-compose.yaml
- docs/providers.md
- examples/bedrock-run.yaml
- tests/e2e/configs/run-bedrock.yaml
- tests/e2e/configs/run-rhelai.yaml
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: build-pr
- GitHub Check: E2E: server mode / ci
- GitHub Check: E2E Tests for Lightspeed Evaluation job
- GitHub Check: E2E: library mode / ci
- GitHub Check: Konflux kflux-prd-rh02 / lightspeed-stack-on-pull-request
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-12-18T10:21:09.038Z
Learnt from: are-ces
Repo: lightspeed-core/lightspeed-stack PR: 935
File: run.yaml:114-115
Timestamp: 2025-12-18T10:21:09.038Z
Learning: In Llama Stack version 0.3.x, telemetry provider configuration is not supported under the `providers` section in run.yaml configuration files. Telemetry can be enabled with just `telemetry.enabled: true` without requiring an explicit provider block.
Applied to files:
`examples/bedrock-run.yaml`, `tests/e2e/configs/run-bedrock.yaml`
🔇 Additional comments (5)
tests/e2e/configs/run-rhelai.yaml (1)
57-59: Good addition for MCP tool runtime coverage. Adding `model-context-protocol` here keeps this E2E config aligned with MCP-enabled test scenarios.
docs/providers.md (1)
40-40: Bedrock support table update looks consistent. The row now reflects the actual project support state introduced in this PR.
.github/workflows/e2e_tests_providers.yaml (1)
16-16: Matrix + secret wiring for Bedrock is correctly plumbed. The Bedrock environment is added to test coverage, and the token is propagated in both deployment modes.
Also applies to: 200-200, 232-232
docker-compose.yaml (1)
67-68: Bedrock token env injection in server-mode compose is correct. This enables Bedrock credential propagation into the `llama-stack` service.
docker-compose-library.yaml (1)
73-74: Library-mode compose Bedrock env wiring looks good. This keeps credential propagation consistent across both deployment modes.
```yaml
- provider_id: braintrust
  provider_type: inline::braintrust
  config:
    openai_api_key: '********'
```
Replace placeholder Braintrust key with env-based configuration.
`openai_api_key: '********'` is not a runnable configuration and may break scoring-provider initialization or use.
🔐 Proposed fix
```diff
 - provider_id: braintrust
   provider_type: inline::braintrust
   config:
-    openai_api_key: '********'
+    openai_api_key: ${env.OPENAI_API_KEY}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/bedrock-run.yaml` around lines 54 - 57, The YAML currently contains
a placeholder secret openai_api_key: '********' which is not runnable; update
the provider config (provider_id/provider_type config block) to load the
Braintrust key from an environment variable instead of a literal placeholder
(e.g., reference a BRAINTRUST/OpenAI API env var in the config for
openai_api_key) so the scoring-provider can initialize at runtime; ensure the
env var name is documented or matches your runtime secrets naming convention.
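The `${env.NAME}` references proposed in these fixes follow the substitution syntax used elsewhere in this run.yaml (including `:=` defaults such as `${env.SQLITE_STORE_DIR:=~/.llama/storage/files}`). As a rough sketch of the semantics (an illustrative re-implementation, not Llama Stack's actual loader; the function name and regex are assumptions):

```python
import os
import re

# Matches ${env.NAME} and ${env.NAME:=default} in config string values.
_ENV_REF = re.compile(r"\$\{env\.([A-Za-z_][A-Za-z0-9_]*)(?::=([^}]*))?\}")

def expand_env_refs(value: str) -> str:
    """Replace each ${env.NAME[:=default]} with the environment value."""
    def _sub(match: re.Match) -> str:
        name, default = match.group(1), match.group(2)
        resolved = os.environ.get(name, default)
        if resolved is None:
            raise KeyError(f"environment variable {name} is not set")
        return resolved
    return _ENV_REF.sub(_sub, value)

os.environ["OPENAI_API_KEY"] = "sk-test"
print(expand_env_refs("${env.OPENAI_API_KEY}"))                  # sk-test
print(expand_env_refs("${env.MISSING_BEDROCK_VAR:=~/.llama/db}"))  # ~/.llama/db
```

Under this model, a missing variable without a `:=` default fails loudly at load time, which is exactly why an env-backed key is preferable to the literal `'********'` placeholder.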
```markdown
| Azure | gpt-5-chat, gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, o1-mini | No or limited | remote::azure | |
| VertexAI | google/gemini-2.0-flash, google/gemini-2.5-flash, google/gemini-2.5-pro [^1] | Yes | remote::vertexai | [1](examples/vertexai-run.yaml) |
| WatsonX | meta-llama/llama-3-3-70b-instruct | Yes | remote::watsonx | [1](examples/watsonx-run.yaml) |
| AWS Bedrock | deepseek.v3-v1 | Yes | remote::bedrock | [1](examples/bedrock-run.yaml) |
```
Use the exact tested Bedrock model identifier in compatibility table.
Line 216 lists deepseek.v3-v1, but the actual runnable configs use deepseek.v3-v1:0 (examples/bedrock-run.yaml Line 141 and tests/e2e/configs/run-bedrock.yaml Line 141). Aligning this avoids copy/paste misconfiguration.
📝 Suggested doc fix
```diff
-| AWS Bedrock | deepseek.v3-v1 | Yes | remote::bedrock | [1](examples/bedrock-run.yaml) |
+| AWS Bedrock | deepseek.v3-v1:0 | Yes | remote::bedrock | [1](examples/bedrock-run.yaml) |
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```markdown
| AWS Bedrock | deepseek.v3-v1:0 | Yes | remote::bedrock | [1](examples/bedrock-run.yaml) |
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` at line 216, Update the compatibility table entry that currently
lists the Bedrock model as deepseek.v3-v1 to use the exact tested model
identifier deepseek.v3-v1:0 so it matches the runnable configs (see
examples/bedrock-run.yaml and tests/e2e/configs/run-bedrock.yaml which use
deepseek.v3-v1:0); change the table cell value to deepseek.v3-v1:0 to avoid
copy/paste misconfiguration.
```yaml
- provider_id: braintrust
  provider_type: inline::braintrust
  config:
    openai_api_key: '********'
```
Avoid non-runnable placeholder credentials in E2E config.
openai_api_key: '********' should be environment-backed to keep this config executable and deterministic in CI.
🔐 Proposed fix
```diff
 - provider_id: braintrust
   provider_type: inline::braintrust
   config:
-    openai_api_key: '********'
+    openai_api_key: ${env.OPENAI_API_KEY}
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
- provider_id: braintrust
  provider_type: inline::braintrust
  config:
    openai_api_key: ${env.OPENAI_API_KEY}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/e2e/configs/run-bedrock.yaml` around lines 54 - 57, The E2E config
contains a hardcoded placeholder credential for openai_api_key; replace the
literal '********' under the provider config
(provider_id/provider_type/config/openai_api_key) with an environment-backed
reference so tests remain runnable in CI (e.g., read from OPENAI_API_KEY or the
project's secret/template mechanism), and update any test harness or docs to
require that environment variable.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.github/workflows/e2e_tests_providers.yaml (1)
200-232: ⚠️ Potential issue | 🟡 Minor: Fail fast if Bedrock credentials are missing.
Line 200 and Line 232 pass `AWS_BEARER_TOKEN_BEDROCK`, but there's no upfront validation step (unlike VertexAI). Add a guard to avoid late test failures with less actionable logs.
🔧 Proposed fix
```diff
+      - name: Validate Bedrock credentials
+        if: matrix.environment == 'bedrock'
+        env:
+          AWS_BEARER_TOKEN_BEDROCK: ${{ secrets.AWS_BEARER_TOKEN_BEDROCK }}
+        run: |
+          if [ -z "$AWS_BEARER_TOKEN_BEDROCK" ]; then
+            echo "❌ AWS_BEARER_TOKEN_BEDROCK is not set. Configure it in repository secrets."
+            exit 1
+          fi
+          echo "✅ Bedrock credential is configured"
       - name: Run services (Server Mode)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/e2e_tests_providers.yaml around lines 200 - 232, Add an explicit guard that fails fast when the AWS_BEARER_TOKEN_BEDROCK secret is missing: in the workflow step that sets AWS_BEARER_TOKEN_BEDROCK (the "Run services (Library Mode)" / preceding run: block where docker compose is started), detect if the OPENAI_API_KEY and AWS_BEARER_TOKEN_BEDROCK env vars are empty and exit with a clear error if AWS_BEARER_TOKEN_BEDROCK is not provided; update the run block that contains docker compose up -d to perform this check before starting services so tests don’t proceed with missing Bedrock credentials.
♻️ Duplicate comments (2)
examples/bedrock-run.yaml (1)
54-57: ⚠️ Potential issue | 🟡 Minor: Replace placeholder Braintrust key with an environment variable.
Line 57 hardcodes `'********'`, which is not executable configuration.
🔐 Proposed fix
```diff
 - provider_id: braintrust
   provider_type: inline::braintrust
   config:
-    openai_api_key: '********'
+    openai_api_key: ${env.OPENAI_API_KEY}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/bedrock-run.yaml` around lines 54 - 57, replace the hardcoded placeholder openai_api_key value in the provider config block (provider_id/provider_type/config -> openai_api_key) with an environment variable reference (e.g., a YAML env substitution or template such as ${BRAINTRUST_API_KEY}) so real credentials are not committed; update any README or runtime docs to instruct setting BRAINTRUST_API_KEY in the environment before running.
tests/e2e/configs/run-bedrock.yaml (1)
54-57: ⚠️ Potential issue | 🟡 Minor: Replace placeholder Braintrust key with env-backed configuration.
Line 57 uses a literal placeholder (`'********'`), which makes this config non-runnable in CI/runtime.
🔐 Proposed fix
```diff
 - provider_id: braintrust
   provider_type: inline::braintrust
   config:
-    openai_api_key: '********'
+    openai_api_key: ${env.OPENAI_API_KEY}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/configs/run-bedrock.yaml` around lines 54 - 57, the config currently hardcodes openai_api_key as a literal placeholder ('********'); update the provider config (provider_id/provider_type block) so openai_api_key is read from an environment-backed value instead of a literal: replace the placeholder with an env variable reference used by the test harness (e.g., a ${BRAINTRUST_API_KEY}-style reference or equivalent in your YAML loader) and ensure any test setup exports that BRAINTRUST_API_KEY env var so the tests can run in CI/runtime.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@examples/bedrock-run.yaml`:
- Around line 1-163: The example and e2e Bedrock YAMLs have diverged;
consolidate them by extracting the shared manifest into a single template and
updating both consumers to render from that template (e.g., create a canonical
template for the top-level keys like version, providers, registered_resources,
vector_stores and have examples/bedrock-run and the e2e run-bedrock config
generated from it), or add a CI diff check that compares the generated output
against the committed files and fails if they differ; implement template
generation (or CI check) and update any build/test scripts that reference the
current static files to use the new template/render step so the configs stay in
sync.
---
Outside diff comments:
In @.github/workflows/e2e_tests_providers.yaml:
- Around line 200-232: Add an explicit guard that fails fast when the
AWS_BEARER_TOKEN_BEDROCK secret is missing: in the workflow step that sets
AWS_BEARER_TOKEN_BEDROCK (the "Run services (Library Mode)" / preceding run:
block where docker compose is started), detect if the OPENAI_API_KEY and
AWS_BEARER_TOKEN_BEDROCK env vars are empty and exit with a clear error if
AWS_BEARER_TOKEN_BEDROCK is not provided; update the run block that contains
docker compose up -d to perform this check before starting services so tests
don’t proceed with missing Bedrock credentials.
---
Duplicate comments:
In `@examples/bedrock-run.yaml`:
- Around line 54-57: Replace the hardcoded placeholder openai_api_key value in
the provider config block (provider_id/provider_type/config -> openai_api_key)
with an environment variable reference (e.g., use a YAML env substitution or
template such as ${BRAINTRUST_API_KEY}) so real credentials are not committed;
update any README or runtime docs to instruct setting BRAINTRUST_API_KEY in the
environment before running.
In `@tests/e2e/configs/run-bedrock.yaml`:
- Around line 54-57: The config currently hardcodes openai_api_key as a literal
placeholder ('********'); update the provider config (provider_id/provider_type
block) so openai_api_key is read from an environment-backed value instead of a
literal—replace the placeholder with an env variable reference used by our test
harness (e.g., a ${BRAINTRUST_API_KEY}-style reference or equivalent in your
YAML loader) and ensure any test setup/export uses that BRAINTRUST_API_KEY env
var so the tests can run in CI/runtime.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 1a37577b-e11e-4312-9938-bc6f74c58185
📒 Files selected for processing (3)
- .github/workflows/e2e_tests_providers.yaml
- examples/bedrock-run.yaml
- tests/e2e/configs/run-bedrock.yaml
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: E2E: library mode / ci
- GitHub Check: build-pr
- GitHub Check: E2E: server mode / ci
- GitHub Check: E2E Tests for Lightspeed Evaluation job
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-12-18T10:21:09.038Z
Learnt from: are-ces
Repo: lightspeed-core/lightspeed-stack PR: 935
File: run.yaml:114-115
Timestamp: 2025-12-18T10:21:09.038Z
Learning: In Llama Stack version 0.3.x, telemetry provider configuration is not supported under the `providers` section in run.yaml configuration files. Telemetry can be enabled with just `telemetry.enabled: true` without requiring an explicit provider block.
Applied to files:
`tests/e2e/configs/run-bedrock.yaml`, `examples/bedrock-run.yaml`
🔇 Additional comments (2)
.github/workflows/e2e_tests_providers.yaml (1)
16-16: Bedrock matrix wiring looks correct. `bedrock` is correctly added to the environment matrix and aligns with the existing `run-${environment}.yaml` selection flow.
tests/e2e/configs/run-bedrock.yaml (1)
142-147: Embedding model mapping is now internally consistent. `vector_stores.default_embedding_model` points to `all-mpnet-base-v2`, and that model is registered under `registered_resources.models`.
Also applies to: 161-163
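The consistency check described in this comment can be automated. Below is a minimal sketch of such a lint step; the plain dicts stand in for the parsed run.yaml, and the function name is an illustrative invention, not part of the project:

```python
# Sketch of a config lint check: the model referenced by
# vector_stores.default_embedding_model must also appear in
# registered_resources.models. The dicts mirror the parsed run.yaml.
config = {
    "registered_resources": {
        "models": [
            {"model_id": "custom-bedrock-model", "provider_id": "aws-bedrock"},
            {"model_id": "all-mpnet-base-v2", "provider_id": "sentence-transformers"},
        ]
    },
    "vector_stores": {
        "default_embedding_model": {
            "provider_id": "sentence-transformers",
            "model_id": "all-mpnet-base-v2",
        }
    },
}

def embedding_model_is_registered(cfg: dict) -> bool:
    """Return True if the default embedding model is a registered resource."""
    ref = cfg["vector_stores"]["default_embedding_model"]
    registered = {
        (m["provider_id"], m["model_id"])
        for m in cfg["registered_resources"]["models"]
    }
    return (ref["provider_id"], ref["model_id"]) in registered

print(embedding_model_is_registered(config))  # True
```

Running this over both Bedrock configs in CI would have caught the earlier `nomic-embed-text-v1.5` mismatch automatically.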
```yaml
version: 2

apis:
- agents
- batches
- datasetio
- eval
- files
- inference
- safety
- scoring
- tool_runtime
- vector_io

benchmarks: []
datasets: []
image_name: starter
# external_providers_dir: /opt/app-root/src/.llama/providers.d

providers:
  inference:
  - config:
      api_key: ${env.AWS_BEARER_TOKEN_BEDROCK}
      region_name: us-east-2
    provider_id: aws-bedrock
    provider_type: remote::bedrock
  - provider_id: openai
    provider_type: remote::openai
    config:
      api_key: ${env.OPENAI_API_KEY}
  - config: {}
    provider_id: sentence-transformers
    provider_type: inline::sentence-transformers
  files:
  - config:
      metadata_store:
        table_name: files_metadata
        backend: sql_default
      storage_dir: ${env.SQLITE_STORE_DIR:=~/.llama/storage/files}
    provider_id: meta-reference-files
    provider_type: inline::localfs
  safety:
  - config:
      excluded_categories: []
    provider_id: llama-guard
    provider_type: inline::llama-guard
  scoring:
  - provider_id: basic
    provider_type: inline::basic
    config: {}
  - provider_id: llm-as-judge
    provider_type: inline::llm-as-judge
    config: {}
  - provider_id: braintrust
    provider_type: inline::braintrust
    config:
      openai_api_key: '********'
  tool_runtime:
  - config: {} # Enable the RAG tool
    provider_id: rag-runtime
    provider_type: inline::rag-runtime
  - config: {} # Enable MCP (Model Context Protocol) support
    provider_id: model-context-protocol
    provider_type: remote::model-context-protocol
  vector_io:
  - config: # Define the storage backend for RAG
      persistence:
        namespace: vector_io::faiss
        backend: kv_default
    provider_id: faiss
    provider_type: inline::faiss
  agents:
  - config:
      persistence:
        agent_state:
          namespace: agents_state
          backend: kv_default
        responses:
          table_name: agents_responses
          backend: sql_default
    provider_id: meta-reference
    provider_type: inline::meta-reference
  batches:
  - config:
      kvstore:
        namespace: batches_store
        backend: kv_default
    provider_id: reference
    provider_type: inline::reference
  datasetio:
  - config:
      kvstore:
        namespace: huggingface_datasetio
        backend: kv_default
    provider_id: huggingface
    provider_type: remote::huggingface
  - config:
      kvstore:
        namespace: localfs_datasetio
        backend: kv_default
    provider_id: localfs
    provider_type: inline::localfs
  eval:
  - config:
      kvstore:
        namespace: eval_store
        backend: kv_default
    provider_id: meta-reference
    provider_type: inline::meta-reference

scoring_fns: []
server:
  port: 8321
storage:
  backends:
    kv_default:
      type: kv_sqlite
      db_path: ${env.KV_STORE_PATH:=~/.llama/storage/kv_store.db}
    sql_default:
      type: sql_sqlite
      db_path: ${env.SQL_STORE_PATH:=~/.llama/storage/sql_store.db}
  stores:
    metadata:
      namespace: registry
      backend: kv_default
    inference:
      table_name: inference_store
      backend: sql_default
      max_write_queue_size: 10000
      num_writers: 4
    conversations:
      table_name: openai_conversations
      backend: sql_default
    prompts:
      namespace: prompts
      backend: kv_default
registered_resources:
  models:
  - model_id: custom-bedrock-model
    model_type: llm
    provider_id: aws-bedrock
    provider_model_id: deepseek.v3-v1:0
  - model_id: all-mpnet-base-v2
    model_type: embedding
    provider_id: sentence-transformers
    provider_model_id: all-mpnet-base-v2
    metadata:
      embedding_dimension: 768
  shields:
  - shield_id: llama-guard
    provider_id: llama-guard
    provider_shield_id: openai/gpt-4o-mini
  vector_stores: []
  datasets: []
  scoring_fns: []
  benchmarks: []
  tool_groups:
  - toolgroup_id: builtin::rag # Register the RAG tool
    provider_id: rag-runtime
vector_stores:
  default_provider_id: faiss
  default_embedding_model:
    provider_id: sentence-transformers
    model_id: all-mpnet-base-v2
```
🧹 Nitpick | 🔵 Trivial
Reduce drift risk between example and e2e Bedrock configs.
This file is nearly identical to tests/e2e/configs/run-bedrock.yaml; consider generating both from one template (or adding a CI diff check) to prevent silent divergence.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/bedrock-run.yaml` around lines 1 - 163, The example and e2e Bedrock
YAMLs have diverged; consolidate them by extracting the shared manifest into a
single template and updating both consumers to render from that template (e.g.,
create a canonical template for the top-level keys like version, providers,
registered_resources, vector_stores and have examples/bedrock-run and the e2e
run-bedrock config generated from it), or add a CI diff check that compares the
generated output against the committed files and fails if they differ; implement
template generation (or CI check) and update any build/test scripts that
reference the current static files to use the new template/render step so the
configs stay in sync.
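The CI diff check suggested above could be sketched as follows; the repo paths are the ones from this PR, and the comment/blank-line normalization is one possible policy, not an established project convention:

```shell
# Sketch of a CI drift guard: normalize away comments and blank lines,
# then diff the example and e2e Bedrock configs.
normalize() {
  grep -v '^[[:space:]]*#' "$1" | grep -v '^[[:space:]]*$'
}

check_drift() {
  normalize "$1" > /tmp/drift_a.norm
  normalize "$2" > /tmp/drift_b.norm
  if ! diff -u /tmp/drift_a.norm /tmp/drift_b.norm; then
    echo "Configs have drifted; update both files together." >&2
    return 1
  fi
  echo "Configs are in sync."
}

# Demo with two throwaway files; in CI the call would be:
#   check_drift examples/bedrock-run.yaml tests/e2e/configs/run-bedrock.yaml
printf 'version: 2\n# comment\n' > /tmp/a.yaml
printf '# other comment\nversion: 2\n' > /tmp/b.yaml
check_drift /tmp/a.yaml /tmp/b.yaml
```

A templating approach (rendering both files from one source) avoids the drift entirely but requires touching the build scripts, so the diff check is the lower-effort option.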
PR Reviewer Guide 🔍
Here are some key observations to aid the review process:
/retest |
Description
Add AWS Bedrock as a fully supported inference provider with documentation, example configurations, and e2e test coverage.
Type of change
Tools used to create PR
Related Tickets & Documents
Checklist before requesting a review
Testing
E2E tests run against the `tests/e2e/configs/run-bedrock.yaml` configuration; the provider matrix in CI was updated (`.github/workflows/e2e_tests_providers.yaml`); testing used the `deepseek.v3-v1` model (tool calling supported); an example configuration is provided in `examples/bedrock-run.yaml`. Test results available here.
Summary by CodeRabbit
New Features
Documentation
Tests
Chores