Merged
22 commits
- dcbe442 feat(api): update via SDK Studio (stainless-app[bot], Jun 25, 2025)
- 58d7319 chore(internal): codegen related update (stainless-app[bot], Jun 25, 2025)
- bd1b989 codegen metadata (stainless-app[bot], Jun 25, 2025)
- 891d6b3 feat(api): update via SDK Studio (stainless-app[bot], Jun 25, 2025)
- 1c702b3 feat(api): update via SDK Studio (stainless-app[bot], Jun 25, 2025)
- 7e5029e codegen metadata (stainless-app[bot], Jun 25, 2025)
- 1daa3f5 feat(api): update via SDK Studio (stainless-app[bot], Jun 25, 2025)
- e5ce590 feat(api): update via SDK Studio (stainless-app[bot], Jun 25, 2025)
- abe573f feat(api): update via SDK Studio (stainless-app[bot], Jun 25, 2025)
- 9a45427 feat(api): update via SDK Studio (stainless-app[bot], Jun 25, 2025)
- 299fd1b feat(api): update via SDK Studio (stainless-app[bot], Jun 25, 2025)
- 98424f4 feat(api): update via SDK Studio (stainless-app[bot], Jun 25, 2025)
- 1ae76f7 feat(api): update via SDK Studio (stainless-app[bot], Jun 25, 2025)
- 66d146a codegen metadata (stainless-app[bot], Jun 25, 2025)
- 45c4a68 codegen metadata (stainless-app[bot], Jun 25, 2025)
- 8d87001 feat(api): define api links and meta as shared models (stainless-app[bot], Jun 25, 2025)
- e92c54b feat(api): update OpenAI spec and add endpoint/smodels (stainless-app[bot], Jun 25, 2025)
- 5d38e2e feat: use inference key for chat.completions.create() (dgellow, Jun 26, 2025)
- 4d2b3dc fix(ci): release-doctor — report correct token name (stainless-app[bot], Jun 27, 2025)
- 47fdf38 Update src/do_gradientai/resources/chat/completions.py (dgellow, Jun 27, 2025)
- 20fa200 Merge pull request #1 from stainless-sdks/sam/per-endpoint-api-key (paperspace-philip, Jun 27, 2025)
- dc5f45a release: 0.1.0-alpha.5 (stainless-app[bot], Jun 27, 2025)
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
 {
-  ".": "0.1.0-alpha.4"
+  ".": "0.1.0-alpha.5"
 }
8 changes: 4 additions & 4 deletions .stats.yml
@@ -1,4 +1,4 @@
-configured_endpoints: 70
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/digitalocean%2Fgradientai-e40feaac59c85aace6aa42d2749b20e0955dbbae58b06c3a650bc03adafcd7b5.yml
-openapi_spec_hash: 825c1a4816938e9f594b7a8c06692667
-config_hash: 211ece2994c6ac52f84f78ee56c1097a
+configured_endpoints: 77
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/digitalocean%2Fgradientai-e8b3cbc80e18e4f7f277010349f25e1319156704f359911dc464cc21a0d077a6.yml
+openapi_spec_hash: c773d792724f5647ae25a5ae4ccec208
+config_hash: ecf128ea21a8fead9dabb9609c4dbce8
31 changes: 31 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,36 @@
# Changelog

## 0.1.0-alpha.5 (2025-06-27)

Full Changelog: [v0.1.0-alpha.4...v0.1.0-alpha.5](https://github.com/digitalocean/gradientai-python/compare/v0.1.0-alpha.4...v0.1.0-alpha.5)

### Features

* **api:** define api links and meta as shared models ([8d87001](https://github.com/digitalocean/gradientai-python/commit/8d87001b51de17dd1a36419c0e926cef119f20b8))
* **api:** update OpenAI spec and add endpoint/smodels ([e92c54b](https://github.com/digitalocean/gradientai-python/commit/e92c54b05f1025b6173945524724143fdafc7728))
* **api:** update via SDK Studio ([1ae76f7](https://github.com/digitalocean/gradientai-python/commit/1ae76f78ce9e74f8fd555e3497299127e9aa6889))
* **api:** update via SDK Studio ([98424f4](https://github.com/digitalocean/gradientai-python/commit/98424f4a2c7e00138fb5eecf94ca72e2ffcc1212))
* **api:** update via SDK Studio ([299fd1b](https://github.com/digitalocean/gradientai-python/commit/299fd1b29b42f6f2581150e52dcf65fc73270862))
* **api:** update via SDK Studio ([9a45427](https://github.com/digitalocean/gradientai-python/commit/9a45427678644c34afe9792a2561f394718e64ff))
* **api:** update via SDK Studio ([abe573f](https://github.com/digitalocean/gradientai-python/commit/abe573fcc2233c7d71f0a925eea8fa9dd4d0fb91))
* **api:** update via SDK Studio ([e5ce590](https://github.com/digitalocean/gradientai-python/commit/e5ce59057792968892317215078ac2c11e811812))
* **api:** update via SDK Studio ([1daa3f5](https://github.com/digitalocean/gradientai-python/commit/1daa3f55a49b5411d1b378fce30aea3ccbccb6d7))
* **api:** update via SDK Studio ([1c702b3](https://github.com/digitalocean/gradientai-python/commit/1c702b340e4fd723393c0f02df2a87d03ca8c9bb))
* **api:** update via SDK Studio ([891d6b3](https://github.com/digitalocean/gradientai-python/commit/891d6b32e5bdb07d23abf898cec17a60ee64f99d))
* **api:** update via SDK Studio ([dcbe442](https://github.com/digitalocean/gradientai-python/commit/dcbe442efc67554e60b3b28360a4d9f7dcbb313a))
* use inference key for chat.completions.create() ([5d38e2e](https://github.com/digitalocean/gradientai-python/commit/5d38e2eb8604a0a4065d146ba71aa4a5a0e93d85))


### Bug Fixes

* **ci:** release-doctor — report correct token name ([4d2b3dc](https://github.com/digitalocean/gradientai-python/commit/4d2b3dcefdefc3830d631c5ac27b58778a299983))


### Chores

* clean up pyproject ([78637e9](https://github.com/digitalocean/gradientai-python/commit/78637e99816d459c27b4f2fd2f6d79c8d32ecfbe))
* **internal:** codegen related update ([58d7319](https://github.com/digitalocean/gradientai-python/commit/58d7319ce68c639c2151a3e96a5d522ec06ff96f))

## 0.1.0-alpha.4 (2025-06-25)

Full Changelog: [v0.1.0-alpha.3...v0.1.0-alpha.4](https://github.com/digitalocean/gradientai-python/compare/v0.1.0-alpha.3...v0.1.0-alpha.4)
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -36,7 +36,7 @@ $ pip install -r requirements-dev.lock

Most of the SDK is generated code. Modifications to code will be persisted between generations, but may
result in merge conflicts between manual patches and changes from the generator. The generator will never
-modify the contents of the `src/gradientai/lib/` and `examples/` directories.
+modify the contents of the `src/do_gradientai/lib/` and `examples/` directories.

## Adding and running examples

93 changes: 59 additions & 34 deletions README.md
@@ -25,16 +25,22 @@ The full API of this library can be found in [api.md](api.md).

```python
import os
-from gradientai import GradientAI
+from do_gradientai import GradientAI

client = GradientAI(
    api_key=os.environ.get("GRADIENTAI_API_KEY"),  # This is the default and can be omitted
)

-versions = client.agents.versions.list(
-    uuid="REPLACE_ME",
+completion = client.chat.completions.create(
+    messages=[
+        {
+            "content": "string",
+            "role": "system",
+        }
+    ],
+    model="llama3-8b-instruct",
)
-print(versions.agent_versions)
+print(completion.id)
```
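The quickstart above relies on the client's documented default of falling back to the `GRADIENTAI_API_KEY` environment variable when no `api_key` argument is passed. A minimal sketch of that lookup pattern (`resolve_api_key` is a hypothetical standalone helper for illustration, not part of the SDK):

```python
import os


def resolve_api_key(explicit=None):
    """Return the key to use: an explicit argument wins, else the env var (or None)."""
    if explicit is not None:
        return explicit
    # Mirrors the "this is the default and can be omitted" comment in the README.
    return os.environ.get("GRADIENTAI_API_KEY")
```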

While you can provide an `api_key` keyword argument,
@@ -49,18 +55,24 @@ Simply import `AsyncGradientAI` instead of `GradientAI` and use `await` with each
```python
import os
import asyncio
-from gradientai import AsyncGradientAI
+from do_gradientai import AsyncGradientAI

client = AsyncGradientAI(
    api_key=os.environ.get("GRADIENTAI_API_KEY"),  # This is the default and can be omitted
)


async def main() -> None:
-    versions = await client.agents.versions.list(
-        uuid="REPLACE_ME",
+    completion = await client.chat.completions.create(
+        messages=[
+            {
+                "content": "string",
+                "role": "system",
+            }
+        ],
+        model="llama3-8b-instruct",
    )
-    print(versions.agent_versions)
+    print(completion.id)


asyncio.run(main())
@@ -84,19 +96,25 @@ Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`
```python
import os
import asyncio
-from gradientai import DefaultAioHttpClient
-from gradientai import AsyncGradientAI
+from do_gradientai import DefaultAioHttpClient
+from do_gradientai import AsyncGradientAI


async def main() -> None:
    async with AsyncGradientAI(
        api_key=os.environ.get("GRADIENTAI_API_KEY"),  # This is the default and can be omitted
        http_client=DefaultAioHttpClient(),
    ) as client:
-        versions = await client.agents.versions.list(
-            uuid="REPLACE_ME",
+        completion = await client.chat.completions.create(
+            messages=[
+                {
+                    "content": "string",
+                    "role": "system",
+                }
+            ],
+            model="llama3-8b-instruct",
        )
-        print(versions.agent_versions)
+        print(completion.id)


asyncio.run(main())
@@ -116,41 +134,48 @@ Typed requests and responses provide autocomplete and documentation within your editor
Nested parameters are dictionaries, typed using `TypedDict`, for example:

```python
-from gradientai import GradientAI
+from do_gradientai import GradientAI

client = GradientAI()

-evaluation_test_case = client.regions.evaluation_test_cases.create(
-    star_metric={},
+completion = client.chat.completions.create(
+    messages=[
+        {
+            "content": "string",
+            "role": "system",
+        }
+    ],
+    model="llama3-8b-instruct",
+    stream_options={},
)
-print(evaluation_test_case.star_metric)
+print(completion.stream_options)
```

## Handling errors

-When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `gradientai.APIConnectionError` is raised.
+When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `do_gradientai.APIConnectionError` is raised.

When the API returns a non-success status code (that is, 4xx or 5xx
-response), a subclass of `gradientai.APIStatusError` is raised, containing `status_code` and `response` properties.
+response), a subclass of `do_gradientai.APIStatusError` is raised, containing `status_code` and `response` properties.

-All errors inherit from `gradientai.APIError`.
+All errors inherit from `do_gradientai.APIError`.

```python
-import gradientai
-from gradientai import GradientAI
+import do_gradientai
+from do_gradientai import GradientAI

client = GradientAI()

try:
    client.agents.versions.list(
        uuid="REPLACE_ME",
    )
-except gradientai.APIConnectionError as e:
+except do_gradientai.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
-except gradientai.RateLimitError as e:
+except do_gradientai.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
-except gradientai.APIStatusError as e:
+except do_gradientai.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
@@ -178,7 +203,7 @@ Connection errors (for example, due to a network connectivity problem), 408 Request Timeout
You can use the `max_retries` option to configure or disable retry settings:

```python
-from gradientai import GradientAI
+from do_gradientai import GradientAI

# Configure the default for all requests:
client = GradientAI(
@@ -198,7 +223,7 @@ By default requests time out after 1 minute. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:

```python
-from gradientai import GradientAI
+from do_gradientai import GradientAI

# Configure the default for all requests:
client = GradientAI(
@@ -252,7 +277,7 @@ if response.my_field is None:
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,

```py
-from gradientai import GradientAI
+from do_gradientai import GradientAI

client = GradientAI()
response = client.agents.versions.with_raw_response.list(
@@ -264,9 +289,9 @@ version = response.parse()  # get the object that `agents.versions.list()` would
print(version.agent_versions)
```

-These methods return an [`APIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/gradientai/_response.py) object.
+These methods return an [`APIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/do_gradientai/_response.py) object.

-The async client returns an [`AsyncAPIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/gradientai/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
+The async client returns an [`AsyncAPIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/do_gradientai/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.

#### `.with_streaming_response`

@@ -330,7 +355,7 @@ You can directly override the [httpx client](https://www.python-httpx.org/api/#c

```python
import httpx
-from gradientai import GradientAI, DefaultHttpxClient
+from do_gradientai import GradientAI, DefaultHttpxClient

client = GradientAI(
# Or use the `GRADIENT_AI_BASE_URL` env var
@@ -353,7 +378,7 @@ client.with_options(http_client=DefaultHttpxClient(...))
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.

```py
-from gradientai import GradientAI
+from do_gradientai import GradientAI

with GradientAI() as client:
# make requests here
@@ -381,8 +406,8 @@ If you've upgraded to the latest version but aren't seeing any new features you
You can determine the version that is being used at runtime with:

```py
-import gradientai
-print(gradientai.__version__)
+import do_gradientai
+print(do_gradientai.__version__)
```

## Requirements