2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
-  ".": "0.2.17"
+  ".": "0.2.18-alpha.1"
}
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 106
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-f59f1c7d33001d60b5190f68aa49eacec90f05dbe694620b8916152c3922051d.yml
-openapi_spec_hash: 804edd2e834493906dc430145402be3b
-config_hash: de16e52db65de71ac35adcdb665a74f5
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-c371abef4463f174f8d35ef3da4697fae5eb221db615f9c305319196472f313b.yml
+openapi_spec_hash: d9bb62faf229c2c2875c732715e9cfd1
+config_hash: e67fd054e95c1e82f78f4b834e96bb65
18 changes: 18 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,23 @@
# Changelog

+## 0.2.18-alpha.1 (2025-08-12)
+
+Full Changelog: [v0.2.17...v0.2.18-alpha.1](https://github.com/llamastack/llama-stack-client-python/compare/v0.2.17...v0.2.18-alpha.1)
+
+### Features
+
+* **api:** update via SDK Studio ([8afae6c](https://github.com/llamastack/llama-stack-client-python/commit/8afae6c1e1a4614cc59db7ae511440693e0479a6))
+* **api:** update via SDK Studio ([143a973](https://github.com/llamastack/llama-stack-client-python/commit/143a973ea9ff81da1d93c421af8c85dbd171ef3c))
+* **api:** update via SDK Studio ([b8e32bb](https://github.com/llamastack/llama-stack-client-python/commit/b8e32bbbf68f8a75c956079119c6b65d7ac165e5))
+* **api:** update via SDK Studio ([1a2c77d](https://github.com/llamastack/llama-stack-client-python/commit/1a2c77df732eb9d0c031e0ff7558176fbf754ad8))
+* **api:** update via SDK Studio ([d66fb5f](https://github.com/llamastack/llama-stack-client-python/commit/d66fb5fe89acb66a55066d82b849bbf4d402db99))
+
+
+### Chores
+
+* **internal:** update comment in script ([8d599cd](https://github.com/llamastack/llama-stack-client-python/commit/8d599cd47f98f704f89c9bd979a55cc334895107))
+* update @stainless-api/prism-cli to v5.15.0 ([5f8ae94](https://github.com/llamastack/llama-stack-client-python/commit/5f8ae94955bb3403c0abe89f2999c2d49af97b07))
+
## 0.2.17 (2025-08-06)

Full Changelog: [v0.2.15...v0.2.17](https://github.com/llamastack/llama-stack-client-python/compare/v0.2.15...v0.2.17)
11 changes: 5 additions & 6 deletions api.md
@@ -89,7 +89,7 @@ Methods:

- <code title="post /v1/openai/v1/responses">client.responses.<a href="./src/llama_stack_client/resources/responses/responses.py">create</a>(\*\*<a href="src/llama_stack_client/types/response_create_params.py">params</a>) -> <a href="./src/llama_stack_client/types/response_object.py">ResponseObject</a></code>
- <code title="get /v1/openai/v1/responses/{response_id}">client.responses.<a href="./src/llama_stack_client/resources/responses/responses.py">retrieve</a>(response_id) -> <a href="./src/llama_stack_client/types/response_object.py">ResponseObject</a></code>
-- <code title="get /v1/openai/v1/responses">client.responses.<a href="./src/llama_stack_client/resources/responses/responses.py">list</a>(\*\*<a href="src/llama_stack_client/types/response_list_params.py">params</a>) -> <a href="./src/llama_stack_client/types/response_list_response.py">ResponseListResponse</a></code>
+- <code title="get /v1/openai/v1/responses">client.responses.<a href="./src/llama_stack_client/resources/responses/responses.py">list</a>(\*\*<a href="src/llama_stack_client/types/response_list_params.py">params</a>) -> <a href="./src/llama_stack_client/types/response_list_response.py">SyncOpenAICursorPage[ResponseListResponse]</a></code>

## InputItems

@@ -290,7 +290,7 @@ Methods:

- <code title="post /v1/openai/v1/chat/completions">client.chat.completions.<a href="./src/llama_stack_client/resources/chat/completions.py">create</a>(\*\*<a href="src/llama_stack_client/types/chat/completion_create_params.py">params</a>) -> <a href="./src/llama_stack_client/types/chat/completion_create_response.py">CompletionCreateResponse</a></code>
- <code title="get /v1/openai/v1/chat/completions/{completion_id}">client.chat.completions.<a href="./src/llama_stack_client/resources/chat/completions.py">retrieve</a>(completion_id) -> <a href="./src/llama_stack_client/types/chat/completion_retrieve_response.py">CompletionRetrieveResponse</a></code>
-- <code title="get /v1/openai/v1/chat/completions">client.chat.completions.<a href="./src/llama_stack_client/resources/chat/completions.py">list</a>(\*\*<a href="src/llama_stack_client/types/chat/completion_list_params.py">params</a>) -> <a href="./src/llama_stack_client/types/chat/completion_list_response.py">CompletionListResponse</a></code>
+- <code title="get /v1/openai/v1/chat/completions">client.chat.completions.<a href="./src/llama_stack_client/resources/chat/completions.py">list</a>(\*\*<a href="src/llama_stack_client/types/chat/completion_list_params.py">params</a>) -> <a href="./src/llama_stack_client/types/chat/completion_list_response.py">SyncOpenAICursorPage[CompletionListResponse]</a></code>

# Completions

@@ -355,7 +355,7 @@ Methods:
- <code title="post /v1/openai/v1/vector_stores">client.vector_stores.<a href="./src/llama_stack_client/resources/vector_stores/vector_stores.py">create</a>(\*\*<a href="src/llama_stack_client/types/vector_store_create_params.py">params</a>) -> <a href="./src/llama_stack_client/types/vector_store.py">VectorStore</a></code>
- <code title="get /v1/openai/v1/vector_stores/{vector_store_id}">client.vector_stores.<a href="./src/llama_stack_client/resources/vector_stores/vector_stores.py">retrieve</a>(vector_store_id) -> <a href="./src/llama_stack_client/types/vector_store.py">VectorStore</a></code>
- <code title="post /v1/openai/v1/vector_stores/{vector_store_id}">client.vector_stores.<a href="./src/llama_stack_client/resources/vector_stores/vector_stores.py">update</a>(vector_store_id, \*\*<a href="src/llama_stack_client/types/vector_store_update_params.py">params</a>) -> <a href="./src/llama_stack_client/types/vector_store.py">VectorStore</a></code>
-- <code title="get /v1/openai/v1/vector_stores">client.vector_stores.<a href="./src/llama_stack_client/resources/vector_stores/vector_stores.py">list</a>(\*\*<a href="src/llama_stack_client/types/vector_store_list_params.py">params</a>) -> <a href="./src/llama_stack_client/types/list_vector_stores_response.py">ListVectorStoresResponse</a></code>
+- <code title="get /v1/openai/v1/vector_stores">client.vector_stores.<a href="./src/llama_stack_client/resources/vector_stores/vector_stores.py">list</a>(\*\*<a href="src/llama_stack_client/types/vector_store_list_params.py">params</a>) -> <a href="./src/llama_stack_client/types/vector_store.py">SyncOpenAICursorPage[VectorStore]</a></code>
- <code title="delete /v1/openai/v1/vector_stores/{vector_store_id}">client.vector_stores.<a href="./src/llama_stack_client/resources/vector_stores/vector_stores.py">delete</a>(vector_store_id) -> <a href="./src/llama_stack_client/types/vector_store_delete_response.py">VectorStoreDeleteResponse</a></code>
- <code title="post /v1/openai/v1/vector_stores/{vector_store_id}/search">client.vector_stores.<a href="./src/llama_stack_client/resources/vector_stores/vector_stores.py">search</a>(vector_store_id, \*\*<a href="src/llama_stack_client/types/vector_store_search_params.py">params</a>) -> <a href="./src/llama_stack_client/types/vector_store_search_response.py">VectorStoreSearchResponse</a></code>

@@ -366,7 +366,6 @@ Types:
```python
from llama_stack_client.types.vector_stores import (
    VectorStoreFile,
-    FileListResponse,
    FileDeleteResponse,
    FileContentResponse,
)
@@ -377,7 +376,7 @@ Methods:
- <code title="post /v1/openai/v1/vector_stores/{vector_store_id}/files">client.vector_stores.files.<a href="./src/llama_stack_client/resources/vector_stores/files.py">create</a>(vector_store_id, \*\*<a href="src/llama_stack_client/types/vector_stores/file_create_params.py">params</a>) -> <a href="./src/llama_stack_client/types/vector_stores/vector_store_file.py">VectorStoreFile</a></code>
- <code title="get /v1/openai/v1/vector_stores/{vector_store_id}/files/{file_id}">client.vector_stores.files.<a href="./src/llama_stack_client/resources/vector_stores/files.py">retrieve</a>(file_id, \*, vector_store_id) -> <a href="./src/llama_stack_client/types/vector_stores/vector_store_file.py">VectorStoreFile</a></code>
- <code title="post /v1/openai/v1/vector_stores/{vector_store_id}/files/{file_id}">client.vector_stores.files.<a href="./src/llama_stack_client/resources/vector_stores/files.py">update</a>(file_id, \*, vector_store_id, \*\*<a href="src/llama_stack_client/types/vector_stores/file_update_params.py">params</a>) -> <a href="./src/llama_stack_client/types/vector_stores/vector_store_file.py">VectorStoreFile</a></code>
-- <code title="get /v1/openai/v1/vector_stores/{vector_store_id}/files">client.vector_stores.files.<a href="./src/llama_stack_client/resources/vector_stores/files.py">list</a>(vector_store_id, \*\*<a href="src/llama_stack_client/types/vector_stores/file_list_params.py">params</a>) -> <a href="./src/llama_stack_client/types/vector_stores/file_list_response.py">FileListResponse</a></code>
+- <code title="get /v1/openai/v1/vector_stores/{vector_store_id}/files">client.vector_stores.files.<a href="./src/llama_stack_client/resources/vector_stores/files.py">list</a>(vector_store_id, \*\*<a href="src/llama_stack_client/types/vector_stores/file_list_params.py">params</a>) -> <a href="./src/llama_stack_client/types/vector_stores/vector_store_file.py">SyncOpenAICursorPage[VectorStoreFile]</a></code>
- <code title="delete /v1/openai/v1/vector_stores/{vector_store_id}/files/{file_id}">client.vector_stores.files.<a href="./src/llama_stack_client/resources/vector_stores/files.py">delete</a>(file_id, \*, vector_store_id) -> <a href="./src/llama_stack_client/types/vector_stores/file_delete_response.py">FileDeleteResponse</a></code>
- <code title="get /v1/openai/v1/vector_stores/{vector_store_id}/files/{file_id}/content">client.vector_stores.files.<a href="./src/llama_stack_client/resources/vector_stores/files.py">content</a>(file_id, \*, vector_store_id) -> <a href="./src/llama_stack_client/types/vector_stores/file_content_response.py">FileContentResponse</a></code>

@@ -589,6 +588,6 @@ Methods:

- <code title="post /v1/openai/v1/files">client.files.<a href="./src/llama_stack_client/resources/files.py">create</a>(\*\*<a href="src/llama_stack_client/types/file_create_params.py">params</a>) -> <a href="./src/llama_stack_client/types/file.py">File</a></code>
- <code title="get /v1/openai/v1/files/{file_id}">client.files.<a href="./src/llama_stack_client/resources/files.py">retrieve</a>(file_id) -> <a href="./src/llama_stack_client/types/file.py">File</a></code>
-- <code title="get /v1/openai/v1/files">client.files.<a href="./src/llama_stack_client/resources/files.py">list</a>(\*\*<a href="src/llama_stack_client/types/file_list_params.py">params</a>) -> <a href="./src/llama_stack_client/types/list_files_response.py">ListFilesResponse</a></code>
+- <code title="get /v1/openai/v1/files">client.files.<a href="./src/llama_stack_client/resources/files.py">list</a>(\*\*<a href="src/llama_stack_client/types/file_list_params.py">params</a>) -> <a href="./src/llama_stack_client/types/file.py">SyncOpenAICursorPage[File]</a></code>
- <code title="delete /v1/openai/v1/files/{file_id}">client.files.<a href="./src/llama_stack_client/resources/files.py">delete</a>(file_id) -> <a href="./src/llama_stack_client/types/delete_file_response.py">DeleteFileResponse</a></code>
- <code title="get /v1/openai/v1/files/{file_id}/content">client.files.<a href="./src/llama_stack_client/resources/files.py">content</a>(file_id) -> object</code>
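
Taken together, the api.md changes show the shape of this release: every OpenAI-compatible `list` endpoint now returns a cursor page instead of a one-shot list-response model. A minimal consumption sketch follows; the base URL, `limit` value, and local-server setup are illustrative assumptions, not part of this diff:

```python
# Sketch: consuming the newly paginated `files.list` endpoint.
# Assumes a Llama Stack server reachable at base_url (hypothetical here).
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# `list` now returns SyncOpenAICursorPage[File] rather than ListFilesResponse.
page = client.files.list(limit=20)
for f in page.data:  # items on the first page only
    print(f.id)

# Iterating the page object itself walks all pages, requesting
# after=<last_id> until the server reports has_more == false.
for f in client.files.list(limit=20):
    print(f.id)
```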
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "llama_stack_client"
-version = "0.2.17"
+version = "0.2.18-alpha.1"
description = "The official Python library for the llama-stack-client API"
dynamic = ["readme"]
license = "MIT"
4 changes: 2 additions & 2 deletions scripts/mock
@@ -21,7 +21,7 @@ echo "==> Starting mock server with URL ${URL}"

# Run prism mock on the given spec
if [ "$1" == "--daemon" ]; then
-  npm exec --package=@stainless-api/prism-cli@5.8.5 -- prism mock "$URL" &> .prism.log &
+  npm exec --package=@stainless-api/prism-cli@5.15.0 -- prism mock "$URL" &> .prism.log &

# Wait for server to come online
echo -n "Waiting for server"
@@ -37,5 +37,5 @@ if [ "$1" == "--daemon" ]; then

echo
else
-  npm exec --package=@stainless-api/prism-cli@5.8.5 -- prism mock "$URL"
+  npm exec --package=@stainless-api/prism-cli@5.15.0 -- prism mock "$URL"
fi
61 changes: 61 additions & 0 deletions scripts/test
@@ -0,0 +1,61 @@
+#!/usr/bin/env bash
+
+set -e
+
+cd "$(dirname "$0")/.."
+
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+YELLOW='\033[0;33m'
+NC='\033[0m' # No Color
+
+function prism_is_running() {
+  curl --silent "http://localhost:4010" >/dev/null 2>&1
+}
+
+kill_server_on_port() {
+  pids=$(lsof -t -i tcp:"$1" || echo "")
+  if [ "$pids" != "" ]; then
+    kill "$pids"
+    echo "Stopped $pids."
+  fi
+}
+
+function is_overriding_api_base_url() {
+  [ -n "$TEST_API_BASE_URL" ]
+}
+
+if ! is_overriding_api_base_url && ! prism_is_running ; then
+  # When we exit this script, make sure to kill the background mock server process
+  trap 'kill_server_on_port 4010' EXIT
+
+  # Start the dev server
+  ./scripts/mock --daemon
+fi
+
+if is_overriding_api_base_url ; then
+  echo -e "${GREEN}✔ Running tests against ${TEST_API_BASE_URL}${NC}"
+  echo
+elif ! prism_is_running ; then
+  echo -e "${RED}ERROR:${NC} The test suite will not run without a mock Prism server"
+  echo -e "running against your OpenAPI spec."
+  echo
+  echo -e "To run the server, pass in the path or url of your OpenAPI"
+  echo -e "spec to the prism command:"
+  echo
+  echo -e "  \$ ${YELLOW}npm exec --package=@stainless-api/prism-cli@5.15.0 -- prism mock path/to/your.openapi.yml${NC}"
+  echo
+
+  exit 1
+else
+  echo -e "${GREEN}✔ Mock prism server is running with your OpenAPI spec${NC}"
+  echo
+fi
+
+export DEFER_PYDANTIC_BUILD=false
+
+echo "==> Running tests"
+rye run pytest "$@"
+
+echo "==> Running Pydantic v1 tests"
+rye run nox -s test-pydantic-v1 -- "$@"
60 changes: 59 additions & 1 deletion src/llama_stack_client/pagination.py
@@ -5,7 +5,7 @@

from ._base_client import BasePage, PageInfo, BaseSyncPage, BaseAsyncPage

-__all__ = ["SyncDatasetsIterrows", "AsyncDatasetsIterrows"]
+__all__ = ["SyncDatasetsIterrows", "AsyncDatasetsIterrows", "SyncOpenAICursorPage", "AsyncOpenAICursorPage"]

_T = TypeVar("_T")

@@ -48,3 +48,61 @@ def next_page_info(self) -> Optional[PageInfo]:
            return None

        return PageInfo(params={"start_index": next_index})


+class SyncOpenAICursorPage(BaseSyncPage[_T], BasePage[_T], Generic[_T]):
+    data: List[_T]
+    has_more: Optional[bool] = None
+    last_id: Optional[str] = None
+
+    @override
+    def _get_page_items(self) -> List[_T]:
+        data = self.data
+        if not data:
+            return []
+        return data
+
+    @override
+    def has_next_page(self) -> bool:
+        has_more = self.has_more
+        if has_more is not None and has_more is False:
+            return False
+
+        return super().has_next_page()
+
+    @override
+    def next_page_info(self) -> Optional[PageInfo]:
+        last_id = self.last_id
+        if not last_id:
+            return None
+
+        return PageInfo(params={"after": last_id})
+
+
+class AsyncOpenAICursorPage(BaseAsyncPage[_T], BasePage[_T], Generic[_T]):
+    data: List[_T]
+    has_more: Optional[bool] = None
+    last_id: Optional[str] = None
+
+    @override
+    def _get_page_items(self) -> List[_T]:
+        data = self.data
+        if not data:
+            return []
+        return data
+
+    @override
+    def has_next_page(self) -> bool:
+        has_more = self.has_more
+        if has_more is not None and has_more is False:
+            return False
+
+        return super().has_next_page()
+
+    @override
+    def next_page_info(self) -> Optional[PageInfo]:
+        last_id = self.last_id
+        if not last_id:
+            return None
+
+        return PageInfo(params={"after": last_id})
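
The sync and async classes are deliberately parallel: each one only tells the shared `BasePage` machinery what the page items are (`data`), whether more pages exist (`has_more`), and which cursor to send next (`after=last_id`). Assuming the standard Stainless page interface (`has_next_page()`/`get_next_page()` on the base classes), manual page-walking looks roughly like this:

```python
# Hedged sketch of manual pagination over the new cursor pages.
# has_next_page() and get_next_page() come from the shared BasePage
# base classes; the subclasses above only supply the cursor params.
page = client.chat.completions.list(limit=10)
while True:
    for item in page.data:
        print(item.id)
    if not page.has_next_page():  # False once has_more is False or last_id is empty
        break
    page = page.get_next_page()  # re-issues the request with after=<last_id>
```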
21 changes: 12 additions & 9 deletions src/llama_stack_client/resources/chat/completions.py
@@ -18,8 +18,9 @@
    async_to_streamed_response_wrapper,
)
from ..._streaming import Stream, AsyncStream
+from ...pagination import SyncOpenAICursorPage, AsyncOpenAICursorPage
from ...types.chat import completion_list_params, completion_create_params
-from ..._base_client import make_request_options
+from ..._base_client import AsyncPaginator, make_request_options
from ...types.chat_completion_chunk import ChatCompletionChunk
from ...types.chat.completion_list_response import CompletionListResponse
from ...types.chat.completion_create_response import CompletionCreateResponse
@@ -466,7 +467,7 @@ def list(
        extra_query: Query | None = None,
        extra_body: Body | None = None,
        timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
-    ) -> CompletionListResponse:
+    ) -> SyncOpenAICursorPage[CompletionListResponse]:
        """
        List all chat completions.

@@ -487,8 +488,9 @@ def list(

          timeout: Override the client-level default timeout for this request, in seconds
        """
-        return self._get(
+        return self._get_api_list(
            "/v1/openai/v1/chat/completions",
+            page=SyncOpenAICursorPage[CompletionListResponse],
            options=make_request_options(
                extra_headers=extra_headers,
                extra_query=extra_query,
@@ -504,7 +506,7 @@ def list(
                    completion_list_params.CompletionListParams,
                ),
            ),
-            cast_to=CompletionListResponse,
+            model=CompletionListResponse,
        )


@@ -933,7 +935,7 @@ async def retrieve(
            cast_to=CompletionRetrieveResponse,
        )

-    async def list(
+    def list(
        self,
        *,
        after: str | NotGiven = NOT_GIVEN,
@@ -946,7 +948,7 @@ async def list(
        extra_query: Query | None = None,
        extra_body: Body | None = None,
        timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
-    ) -> CompletionListResponse:
+    ) -> AsyncPaginator[CompletionListResponse, AsyncOpenAICursorPage[CompletionListResponse]]:
        """
        List all chat completions.

@@ -967,14 +969,15 @@

          timeout: Override the client-level default timeout for this request, in seconds
        """
-        return await self._get(
+        return self._get_api_list(
            "/v1/openai/v1/chat/completions",
+            page=AsyncOpenAICursorPage[CompletionListResponse],
            options=make_request_options(
                extra_headers=extra_headers,
                extra_query=extra_query,
                extra_body=extra_body,
                timeout=timeout,
-                query=await async_maybe_transform(
+                query=maybe_transform(
                    {
                        "after": after,
                        "limit": limit,
@@ -984,7 +987,7 @@
                    completion_list_params.CompletionListParams,
                ),
            ),
-            cast_to=CompletionListResponse,
+            model=CompletionListResponse,
        )
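
Note the signature change in this hunk: the async `list` loses its `async` keyword because `_get_api_list` returns an `AsyncPaginator` immediately, deferring the HTTP request until the paginator is awaited or iterated; that is also why the `await` and `async_maybe_transform` drop out. A usage sketch under that assumption (the base URL is hypothetical):

```python
# Sketch: async auto-pagination with the new AsyncPaginator return type.
import asyncio

from llama_stack_client import AsyncLlamaStackClient


async def main() -> None:
    client = AsyncLlamaStackClient(base_url="http://localhost:8321")  # assumed local server
    # No await on list() itself; `async for` drives the request(s).
    async for completion in client.chat.completions.list(limit=10):
        print(completion.id)


asyncio.run(main())
```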

