2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "0.4.0-alpha.6"
".": "0.4.0-alpha.7"
}
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
-configured_endpoints: 89
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-af20fa1866f461e9fef4f7fd226d757b0dddee907e2a083fa582ac0580735e20.yml
-openapi_spec_hash: 68caf264f8ade02c34456c526d7300b1
+configured_endpoints: 96
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-ec32c74fc91569d09bb78aa3cdd8ebc65ed6c83bbd845fb79676e37c1711eda2.yml
+openapi_spec_hash: 88f8449f767bd696985306c5dda5d026
config_hash: e8a35d9d37cb4774b4b0fe1b167dc156
8 changes: 8 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,13 @@
# Changelog

+## 0.4.0-alpha.7 (2025-11-13)
+
+Full Changelog: [v0.4.0-alpha.6...v0.4.0-alpha.7](https://github.com/llamastack/llama-stack-client-python/compare/v0.4.0-alpha.6...v0.4.0-alpha.7)
+
+### Bug Fixes
+
+* **api:** ensure openapi spec has deprecated routes ([44f2d4e](https://github.com/llamastack/llama-stack-client-python/commit/44f2d4e613a4f7ace0a874c4cd2cf18a28dabed4))
+
## 0.4.0-alpha.6 (2025-11-12)

Full Changelog: [v0.4.0-alpha.5...v0.4.0-alpha.6](https://github.com/llamastack/llama-stack-client-python/compare/v0.4.0-alpha.5...v0.4.0-alpha.6)
38 changes: 12 additions & 26 deletions README.md
@@ -33,15 +33,10 @@ from llama_stack_client import LlamaStackClient

client = LlamaStackClient()

-completion = client.chat.completions.create(
-    messages=[
-        {
-            "content": "string",
-            "role": "user",
-        }
-    ],
-    model="model",
+response = client.models.register(
+    model_id="model_id",
)
+print(response.identifier)
```

While you can provide an `api_key` keyword argument, we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/) to add `LLAMA_STACK_CLIENT_API_KEY="My API Key"` to your `.env` file so that your API Key is not stored in source control.
@@ -98,15 +93,10 @@ client = AsyncLlamaStackClient(


async def main() -> None:
-    completion = await client.chat.completions.create(
-        messages=[
-            {
-                "content": "string",
-                "role": "user",
-            }
-        ],
-        model="model",
+    response = await client.models.register(
+        model_id="model_id",
    )
+    print(response.identifier)


asyncio.run(main())
@@ -137,15 +127,10 @@ async def main() -> None:
    async with AsyncLlamaStackClient(
        http_client=DefaultAioHttpClient(),
    ) as client:
-        completion = await client.chat.completions.create(
-            messages=[
-                {
-                    "content": "string",
-                    "role": "user",
-                }
-            ],
-            model="model",
+        response = await client.models.register(
+            model_id="model_id",
        )
+        print(response.identifier)


asyncio.run(main())
@@ -213,10 +198,11 @@ from llama_stack_client import LlamaStackClient

client = LlamaStackClient()

-tool_defs = client.tool_runtime.list_tools(
+client.toolgroups.register(
+    provider_id="provider_id",
+    toolgroup_id="toolgroup_id",
    mcp_endpoint={"uri": "uri"},
)
-print(tool_defs.mcp_endpoint)
```

## File uploads
9 changes: 9 additions & 0 deletions api.md
@@ -4,6 +4,7 @@
from llama_stack_client.types import (
    InterleavedContent,
    InterleavedContentItem,
+    ParamType,
    SafetyViolation,
    SamplingParams,
    ScoringResult,
@@ -23,6 +24,8 @@ Methods:

- <code title="get /v1/toolgroups">client.toolgroups.<a href="./src/llama_stack_client/resources/toolgroups.py">list</a>() -> <a href="./src/llama_stack_client/types/toolgroup_list_response.py">ToolgroupListResponse</a></code>
- <code title="get /v1/toolgroups/{toolgroup_id}">client.toolgroups.<a href="./src/llama_stack_client/resources/toolgroups.py">get</a>(toolgroup_id) -> <a href="./src/llama_stack_client/types/tool_group.py">ToolGroup</a></code>
- <code title="post /v1/toolgroups">client.toolgroups.<a href="./src/llama_stack_client/resources/toolgroups.py">register</a>(\*\*<a href="src/llama_stack_client/types/toolgroup_register_params.py">params</a>) -> None</code>
- <code title="delete /v1/toolgroups/{toolgroup_id}">client.toolgroups.<a href="./src/llama_stack_client/resources/toolgroups.py">unregister</a>(toolgroup_id) -> None</code>

# Tools

@@ -285,13 +288,16 @@ from llama_stack_client.types import (
    Model,
    ModelRetrieveResponse,
    ModelListResponse,
+    ModelRegisterResponse,
)
```

Methods:

- <code title="get /v1/models/{model_id}">client.models.<a href="./src/llama_stack_client/resources/models/models.py">retrieve</a>(model_id) -> <a href="./src/llama_stack_client/types/model_retrieve_response.py">ModelRetrieveResponse</a></code>
- <code title="get /v1/models">client.models.<a href="./src/llama_stack_client/resources/models/models.py">list</a>() -> <a href="./src/llama_stack_client/types/model_list_response.py">ModelListResponse</a></code>
- <code title="post /v1/models">client.models.<a href="./src/llama_stack_client/resources/models/models.py">register</a>(\*\*<a href="src/llama_stack_client/types/model_register_params.py">params</a>) -> <a href="./src/llama_stack_client/types/model_register_response.py">ModelRegisterResponse</a></code>
- <code title="delete /v1/models/{model_id}">client.models.<a href="./src/llama_stack_client/resources/models/models.py">unregister</a>(model_id) -> None</code>

## OpenAI

@@ -360,6 +366,8 @@ Methods:

- <code title="get /v1/shields/{identifier}">client.shields.<a href="./src/llama_stack_client/resources/shields.py">retrieve</a>(identifier) -> <a href="./src/llama_stack_client/types/shield.py">Shield</a></code>
- <code title="get /v1/shields">client.shields.<a href="./src/llama_stack_client/resources/shields.py">list</a>() -> <a href="./src/llama_stack_client/types/shield_list_response.py">ShieldListResponse</a></code>
- <code title="delete /v1/shields/{identifier}">client.shields.<a href="./src/llama_stack_client/resources/shields.py">delete</a>(identifier) -> None</code>
- <code title="post /v1/shields">client.shields.<a href="./src/llama_stack_client/resources/shields.py">register</a>(\*\*<a href="src/llama_stack_client/types/shield_register_params.py">params</a>) -> <a href="./src/llama_stack_client/types/shield.py">Shield</a></code>

# Scoring

@@ -391,6 +399,7 @@ Methods:

- <code title="get /v1/scoring-functions/{scoring_fn_id}">client.scoring_functions.<a href="./src/llama_stack_client/resources/scoring_functions.py">retrieve</a>(scoring_fn_id) -> <a href="./src/llama_stack_client/types/scoring_fn.py">ScoringFn</a></code>
- <code title="get /v1/scoring-functions">client.scoring_functions.<a href="./src/llama_stack_client/resources/scoring_functions.py">list</a>() -> <a href="./src/llama_stack_client/types/scoring_function_list_response.py">ScoringFunctionListResponse</a></code>
- <code title="post /v1/scoring-functions">client.scoring_functions.<a href="./src/llama_stack_client/resources/scoring_functions.py">register</a>(\*\*<a href="src/llama_stack_client/types/scoring_function_register_params.py">params</a>) -> None</code>

# Files

2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "llama_stack_client"
version = "0.4.0-alpha.6"
version = "0.4.0-alpha.7"
description = "The official Python library for the llama-stack-client API"
dynamic = ["readme"]
license = "MIT"