Merged
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
43 changes: 29 additions & 14 deletions specification/DigitalOcean-public.v2.yaml
Original file line number Diff line number Diff line change
@@ -3087,27 +3087,39 @@ components:
type: http
scheme: bearer
description: |
## Inference API Authentication
## OAuth Authentication

In order to interact with the DigitalOcean API, you or your application must
authenticate.

The DigitalOcean API handles this through OAuth, an open standard for
authorization. OAuth allows you to delegate access to your account.
Scopes can be used to grant full access, read-only access, or access to
a specific set of endpoints.

You can generate an OAuth token by visiting the [Apps & API](https://cloud.digitalocean.com/account/api/tokens)
section of the DigitalOcean control panel for your account.

The Inference APIs use API access keys for authentication, which are
separate from the DigitalOcean OAuth tokens used by the control-plane API.
An OAuth token functions as a complete authentication request. In effect, it
acts as a substitute for a username and password pair.

Include the key as a Bearer token in the `Authorization` header of each
request. All requests must be made over HTTPS.
Because of this, it is absolutely **essential** that you keep your OAuth
tokens secure. In fact, upon generation, the web interface will only display
each token a single time in order to prevent the token from being compromised.

### Key Types
DigitalOcean access tokens begin with an identifiable prefix in order to
distinguish them from other similar tokens.

| API | Key Type | Key Pattern | How to Obtain |
|-----|----------|-------------|---------------|
| Serverless Inference | Model access key | `sk-do-*` (e.g., `sk-do-v1-abcd1234...`) | Generate in the [AI/ML section](https://cloud.digitalocean.com/gen-ai/inference/keys) of the DigitalOcean control panel |
| Agent Inference | Endpoint access key | Alphanumeric string (e.g., `Abc1Def2Ghi3Jkl4...`) | Provided when provisioning an agent endpoint |
- `dop_v1_` for personal access tokens generated in the control panel
- `doo_v1_` for tokens generated by applications using [the OAuth flow](https://docs.digitalocean.com/reference/api/oauth-api/)
- `dor_v1_` for OAuth refresh tokens

### Authenticate with a Bearer Authorization Header

**Serverless Inference:**

```
curl -X POST -H "Authorization: Bearer $MODEL_ACCESS_KEY" "https://inference.do-ai.run/v1/chat/completions"
curl -X POST -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" "https://inference.do-ai.run/v1/chat/completions"
```

**Agent Inference:**
@@ -3116,9 +3128,12 @@ components:
curl -X POST -H "Authorization: Bearer $AGENT_ACCESS_KEY" "https://{your-agent-url}.agents.do-ai.run/v1/chat/completions?agent=true"
```

**Note:** These keys are not interchangeable with DigitalOcean OAuth
tokens (`dop_v1_*`, `doo_v1_*`, `dor_v1_*`). OAuth tokens are used
exclusively with the control-plane API at `https://api.digitalocean.com`.
**Note:** Agent Inference APIs use an `agent_access_key` (endpoint access
key) instead of a DigitalOcean OAuth token. The `agent_access_key` is
provided when you provision an agent endpoint and is scoped to that
specific agent. It is not interchangeable with DigitalOcean OAuth tokens
(`dop_v1_*`, `doo_v1_*`, `dor_v1_*`), which are used with Serverless
Inference and the control-plane API at `https://api.digitalocean.com`.

security:
- bearer_auth: []
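The token prefixes introduced in the updated description above can be checked programmatically. A minimal sketch, assuming only the documented prefixes (`dop_v1_`, `doo_v1_`, `dor_v1_`); the function name and mapping are illustrative and not part of any DigitalOcean SDK:

```python
# Classify a DigitalOcean API token by its documented prefix.
# The prefix-to-type mapping follows the description above; the helper
# itself is an illustration, not an official DigitalOcean API.
TOKEN_PREFIXES = {
    "dop_v1_": "personal access token",
    "doo_v1_": "OAuth application token",
    "dor_v1_": "OAuth refresh token",
}

def classify_token(token: str) -> str:
    """Return the documented token type for a given prefix, else 'unknown'."""
    for prefix, kind in TOKEN_PREFIXES.items():
        if token.startswith(prefix):
            return kind
    return "unknown"
```

Note that an inference-style key such as `sk-do-…` deliberately falls through to `"unknown"` here, matching the description's point that the token families are not interchangeable.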
11 changes: 0 additions & 11 deletions specification/inference_description.yml
@@ -9,17 +9,6 @@ introduction: |

These APIs are independent of the main DigitalOcean control-plane API (`https://api.digitalocean.com`).

## Authentication

In order to make requests to the Inference APIs, you must authenticate using a Bearer token.

| API | Authentication | Key Pattern |
|-----|----------------|-------------|
| Serverless Inference | Model access key | `sk-do-*` (e.g., `sk-do-v1-abcd1234efgh5678ijkl9012mnop3456qrst7890uvwx1234yzab5678cdef`) |
| Agent Inference | Endpoint access key | Alphanumeric string (e.g., `Abc1Def2Ghi3Jkl4Mno5Pqr6Stu7Vwx8`) |

**Note:** The Control Plane API (`https://api.digitalocean.com`) uses OAuth tokens for authentication, which is different from the inference API keys.

## Base URLs

| API | Base URL |
@@ -2,7 +2,7 @@ lang: cURL
source: |-
# Image Generation
curl -X POST \
-H "Authorization: Bearer $MODEL_ACCESS_KEY" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model_id": "fal-ai/flux/schnell",
@@ -14,7 +14,7 @@

# Audio Generation
curl -X POST \
-H "Authorization: Bearer $MODEL_ACCESS_KEY" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model_id": "fal-ai/stable-audio-25/text-to-audio",
@@ -30,7 +30,7 @@

# Text-to-Speech
curl -X POST \
-H "Authorization: Bearer $MODEL_ACCESS_KEY" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model_id": "fal-ai/elevenlabs/tts/multilingual-v2",
@@ -1,5 +1,5 @@
lang: cURL
source: |-
curl -sS -X POST "https://inference.do-ai.run/v1/batches/0e9d1d35-3d1e-4d66-9a2f-8c7e0f6b3e21/cancel" \
-H "Authorization: Bearer $MODEL_ACCESS_KEY" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
-H "Content-Type: application/json" | jq
@@ -2,7 +2,7 @@ lang: cURL
source: |-
# OpenAI provider - Chat Completions
curl -sS -X POST "https://inference.do-ai.run/v1/batches" \
-H "Authorization: Bearer $MODEL_ACCESS_KEY" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"file_id": "a1b2c3d4-e5f6-4789-90ab-cdef12345678",
@@ -14,7 +14,7 @@

# Anthropic provider - Messages
curl -sS -X POST "https://inference.do-ai.run/v1/batches" \
-H "Authorization: Bearer $MODEL_ACCESS_KEY" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"file_id": "a1b2c3d4-e5f6-4789-90ab-cdef12345678",
@@ -1,7 +1,7 @@
lang: cURL
source: |-
curl -sS -X POST "https://inference.do-ai.run/v1/batches/files" \
-H "Authorization: Bearer $MODEL_ACCESS_KEY" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"file_name": "batch_requests.jsonl"
@@ -2,6 +2,6 @@ lang: cURL
source: |-
curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $MODEL_ACCESS_KEY" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
-d '{"messages": [{"role": "user", "content": "What is the capital of Portugal?"}], "model": "meta-llama/Meta-Llama-3.1-8B-Instruct"}' \
"https://inference.do-ai.run/v1/chat/completions"
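The Bearer-header pattern in the cURL snippet above can be mirrored with Python's standard library. A minimal sketch that builds (but does not send) the request; the helper name is ours, while the endpoint, header, and payload shape mirror the cURL example:

```python
import json
import urllib.request

def build_chat_request(token: str, model: str, messages: list) -> urllib.request.Request:
    """Build a chat-completions POST request with a Bearer Authorization header.

    The helper is illustrative; it only constructs the request object,
    leaving it to the caller to send it with urllib.request.urlopen.
    """
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        "https://inference.do-ai.run/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the request is then `urllib.request.urlopen(build_chat_request(...))`, the same call the cURL example performs over HTTPS.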
@@ -2,6 +2,6 @@ lang: cURL
source: |-
curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $MODEL_ACCESS_KEY" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
-d '{"model":"qwen3-embedding-0.6b","input":["hello world","goodbye world"],"encoding_format":"float","user":"user-1234"}' \
"https://inference.do-ai.run/v1/embeddings"
@@ -2,6 +2,6 @@ lang: cURL
source: |-
curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $MODEL_ACCESS_KEY" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
-d '{"prompt": "A cute baby sea otter floating on its back in calm blue water", "model": "openai-gpt-image-1", "size": "auto", "quality": "auto"}' \
"https://inference.do-ai.run/v1/images/generations"
@@ -2,6 +2,6 @@ lang: cURL
source: |-
curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $MODEL_ACCESS_KEY" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
-d '{"model": "claude-opus-4-6", "max_tokens": 1024, "messages": [{"role": "user", "content": "What is the capital of Portugal?"}]}' \
"https://inference.do-ai.run/v1/messages"
@@ -1,7 +1,7 @@
lang: cURL
source: |-
curl -sS -X POST \
-H "Authorization: Bearer $MODEL_ACCESS_KEY" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "openai-gpt-oss-20b",
@@ -1,5 +1,5 @@
lang: cURL
source: |-
curl -sS -X GET "https://inference.do-ai.run/v1/batches/0e9d1d35-3d1e-4d66-9a2f-8c7e0f6b3e21" \
-H "Authorization: Bearer $MODEL_ACCESS_KEY" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
-H "Content-Type: application/json"
@@ -1,5 +1,5 @@
lang: cURL
source: |-
curl -sS -X GET "https://inference.do-ai.run/v1/batches/0e9d1d35-3d1e-4d66-9a2f-8c7e0f6b3e21/results" \
-H "Authorization: Bearer $MODEL_ACCESS_KEY" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
-H "Content-Type: application/json" | jq
@@ -1,5 +1,5 @@
lang: cURL
source: |-
curl -sS -X GET "https://inference.do-ai.run/v1/batches?limit=20" \
-H "Authorization: Bearer $MODEL_ACCESS_KEY" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
-H "Content-Type: application/json" | jq
@@ -2,5 +2,5 @@ lang: cURL
source: |-
curl -X GET \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $MODEL_ACCESS_KEY" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
"https://inference.do-ai.run/v1/models"
@@ -3,7 +3,7 @@ source: |-
import { InferenceClient } from "@digitalocean/dots";

const client = new InferenceClient({
apiKey: process.env.MODEL_ACCESS_KEY,
apiKey: process.env.DIGITALOCEAN_TOKEN,
});

// Image Generation
@@ -3,7 +3,7 @@ source: |-
import { InferenceClient } from "@digitalocean/dots";

const client = new InferenceClient({
apiKey: process.env.MODEL_ACCESS_KEY,
apiKey: process.env.DIGITALOCEAN_TOKEN,
});

const completion = await client.chat.completions.create({
@@ -3,7 +3,7 @@ source: |-
import { InferenceClient } from "@digitalocean/dots";

const client = new InferenceClient({
apiKey: process.env.MODEL_ACCESS_KEY,
apiKey: process.env.DIGITALOCEAN_TOKEN,
});

const resp = await client.embeddings.create({
@@ -3,7 +3,7 @@ source: |-
import { InferenceClient } from "@digitalocean/dots";

const client = new InferenceClient({
apiKey: process.env.MODEL_ACCESS_KEY,
apiKey: process.env.DIGITALOCEAN_TOKEN,
});

const resp = await client.images.generate({
@@ -3,7 +3,7 @@ source: |-
import { InferenceClient } from "@digitalocean/dots";

const client = new InferenceClient({
apiKey: process.env.MODEL_ACCESS_KEY,
apiKey: process.env.DIGITALOCEAN_TOKEN,
});

const resp = await client.messages.create({
@@ -3,7 +3,7 @@ source: |-
import { InferenceClient } from "@digitalocean/dots";

const client = new InferenceClient({
apiKey: process.env.MODEL_ACCESS_KEY,
apiKey: process.env.DIGITALOCEAN_TOKEN,
});

const resp = await client.responses.create({
@@ -3,7 +3,7 @@ source: |-
import { InferenceClient } from "@digitalocean/dots";

const client = new InferenceClient({
apiKey: process.env.MODEL_ACCESS_KEY,
apiKey: process.env.DIGITALOCEAN_TOKEN,
});

const resp = await client.models.list();
@@ -3,7 +3,7 @@ source: |-
import os
from pydo import Client

client = Client(token=os.environ.get("MODEL_ACCESS_KEY"))
client = Client(token=os.environ.get("DIGITALOCEAN_TOKEN"))

# Image Generation
resp = client.async_images.generate(
@@ -3,7 +3,7 @@ source: |-
import os
from pydo import Client

client = Client(token=os.environ.get("MODEL_ACCESS_KEY"))
client = Client(token=os.environ.get("DIGITALOCEAN_TOKEN"))

resp = client.chat.completions.create(
model="llama3.3-70b-instruct",
@@ -3,7 +3,7 @@ source: |-
import os
from pydo import Client

client = Client(token=os.environ.get("MODEL_ACCESS_KEY"))
client = Client(token=os.environ.get("DIGITALOCEAN_TOKEN"))

resp = client.embeddings.create(
model="qwen3-embedding-0.6b",
@@ -3,7 +3,7 @@ source: |-
import os
from pydo import Client

client = Client(token=os.environ.get("MODEL_ACCESS_KEY"))
client = Client(token=os.environ.get("DIGITALOCEAN_TOKEN"))

resp = client.images.generate(
model="openai-gpt-image-1",
@@ -3,7 +3,7 @@ source: |-
import os
from pydo import Client

client = Client(token=os.environ.get("MODEL_ACCESS_KEY"))
client = Client(token=os.environ.get("DIGITALOCEAN_TOKEN"))

resp = client.messages.create(
model="claude-opus-4-6",
@@ -3,7 +3,7 @@ source: |-
import os
from pydo import Client

client = Client(token=os.environ.get("MODEL_ACCESS_KEY"))
client = Client(token=os.environ.get("DIGITALOCEAN_TOKEN"))

resp = client.responses.create(
model="openai-gpt-oss-20b",
@@ -3,7 +3,7 @@ source: |-
import os
from pydo import Client

client = Client(token=os.environ.get("MODEL_ACCESS_KEY"))
client = Client(token=os.environ.get("DIGITALOCEAN_TOKEN"))

resp = client.models.list()
