docs: improve custom provider guide (#1554)
lgrammel committed May 10, 2024
1 parent 59288ed commit 35a3d8e
49 changes: 35 additions & 14 deletions content/providers/03-community-providers/01-custom-providers.mdx
description: Learn how to write a custom provider for the Vercel AI SDK
# Writing a Custom Provider

The Vercel AI SDK provides a [Language Model Specification](https://github.com/vercel/ai/tree/main/packages/provider/src/language-model/v1).
You can write your own provider that adheres to the specification and it will be compatible with the Vercel AI SDK.

It can be imported from `'@ai-sdk/provider'`. We also provide utilities that make it easier to implement a custom provider. You can find them in the `@ai-sdk/provider-utils` package ([source code](https://github.com/vercel/ai/tree/main/packages/provider-utils)).

<Note>
If you open-source a provider, we'd love to promote it here. Please send us a
PR to add it to the [Community Providers](/providers/community-providers)
section.
</Note>

## Provider Implementation Guide

Implementing a custom language model provider involves several steps:

- Creating an entry point
- Adding a language model implementation
- Mapping the input (prompt, tools, settings)
- Processing the results (generate, streaming, tool calls)
- Supporting object generation

The best way to get started is to copy a reference implementation and modify it to fit your needs.
Check out the [Mistral reference implementation](https://github.com/vercel/ai/tree/main/packages/mistral)
to see how the project is structured, and feel free to copy the setup.

### Creating an Entry Point

Each AI SDK provider should follow the pattern of using a factory function that returns a provider instance, and should also export a default instance.
```ts
// createCustomProvider factory function (definition elided in this diff view)

export const customProvider = createCustomProvider();
```
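As a minimal sketch of this pattern (all names, settings fields, and the example base URL below are hypothetical stand-ins, not AI SDK exports):

```typescript
// Sketch of a provider entry point following the factory pattern.
// createCustomProvider, CustomProviderSettings, CustomChatLanguageModel,
// and the base URL are illustrative, not part of the AI SDK.

export interface CustomProviderSettings {
  baseURL?: string;
  apiKey?: string;
}

export class CustomChatLanguageModel {
  readonly specificationVersion = 'v1';
  readonly provider = 'custom';

  constructor(
    readonly modelId: string,
    readonly settings: CustomProviderSettings,
  ) {}

  // doGenerate and doStream would be implemented here.
}

// Factory function that returns a provider instance:
export function createCustomProvider(options: CustomProviderSettings = {}) {
  const createModel = (modelId: string) =>
    new CustomChatLanguageModel(modelId, {
      baseURL: options.baseURL ?? 'https://api.example.com/v1',
      apiKey: options.apiKey,
    });

  // The provider is callable (shorthand for creating a chat model)
  // and also exposes an explicit .chat method.
  const provider = Object.assign(
    (modelId: string) => createModel(modelId),
    { chat: createModel },
  );
  return provider;
}

// Default provider instance:
export const customProvider = createCustomProvider();
```

Returning a callable object lets users write both `customProvider('model-id')` and `customProvider.chat('model-id')`.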

### Implementing the Language Model

A [language model](https://github.com/vercel/ai/blob/main/packages/provider/src/language-model/v1/language-model-v1.ts) needs to implement:

- metadata fields
  - `specificationVersion: 'v1'` - always `'v1'`
  - `provider: string` - name of the provider
  - `modelId: string` - unique identifier of the model
  - `defaultObjectGenerationMode` - default object generation mode, e.g. "json"
- `doGenerate` method
- `doStream` method

Please refer to the Language Model Specification in the [AI SDK repository](https://github.com/vercel/ai/tree/main/packages/provider/src/language-model/v1).
Check out the [Mistral language model](https://github.com/vercel/ai/blob/main/packages/mistral/src/mistral-chat-language-model.ts) as an example.
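As a sketch, such a model class might look like the following. The class name, settings type, and stubbed return values are hypothetical; a real implementation must follow the full specification:

```typescript
// Hypothetical skeleton of a language model implementation.
// The field names follow the specification listed above; everything
// else is an illustrative stand-in.

interface MyModelSettings {
  apiKey?: string;
}

export class MyChatLanguageModel {
  readonly specificationVersion = 'v1'; // always 'v1'
  readonly provider = 'my-provider'; // name of the provider
  readonly defaultObjectGenerationMode = 'json'; // default object generation mode

  constructor(
    readonly modelId: string, // unique identifier of the model
    readonly settings: MyModelSettings = {},
  ) {}

  async doGenerate(_options: { prompt: unknown }) {
    // 1. map the prompt and settings to the provider API format
    // 2. call the provider API
    // 3. convert the response to the format required by the AI SDK
    return {
      text: 'stub response',
      finishReason: 'stop' as const,
      usage: { promptTokens: 0, completionTokens: 0 },
    };
  }

  async doStream(_options: { prompt: unknown }) {
    // A real implementation returns a stream of text deltas, tool calls,
    // and a finish part; omitted in this sketch.
    throw new Error('not implemented in this sketch');
  }
}
```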

At a high level, both `doGenerate` and `doStream` methods should:

1. **Map the prompt and the settings to the format required by the provider API.** This logic can be extracted into a helper; the Mistral provider, for example, contains a `getArgs` method.
2. **Call the provider API.** You can use `fetch` calls or a client library offered by the provider.
3. **Process the results.** You need to convert the response into the format required by the AI SDK.
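These three steps can be sketched as follows. The names and request/response shapes are invented for illustration (`getArgs` echoes the Mistral provider's naming); a real provider maps to and from its actual API types:

```typescript
// Hypothetical request/response shapes for a provider API.
type Message = { role: 'system' | 'user' | 'assistant'; content: string };

// Step 1: map the prompt and settings to the provider's request format.
export function getArgs(prompt: Message[], settings: { temperature?: number }) {
  return {
    messages: prompt.map((m) => ({ role: m.role, content: m.content })),
    temperature: settings.temperature ?? 1,
  };
}

// Step 3: convert the provider response to the format the AI SDK expects.
export function mapResponse(response: { choices: { text: string }[] }) {
  return { text: response.choices[0]?.text ?? '', finishReason: 'stop' as const };
}

// doGenerate ties the steps together. The API call (step 2) is injected
// here so the mapping logic stays testable; a real provider would use
// fetch or the provider's client library.
export async function doGenerate(
  prompt: Message[],
  settings: { temperature?: number },
  callApi: (args: ReturnType<typeof getArgs>) => Promise<{ choices: { text: string }[] }>,
) {
  const args = getArgs(prompt, settings); // step 1
  const response = await callApi(args); // step 2
  return mapResponse(response); // step 3
}
```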

### Errors

The AI SDK provides [standardized errors](https://github.com/vercel/ai/tree/main/packages/provider/src/errors) that should be used by providers where possible. This makes it easier for users to debug problems.

### Retries, timeouts, and abort signals

The AI SDK handles retries, timeouts, and aborting requests in a unified way. The model classes should not implement retries or timeouts themselves. Instead, they should use the `abortSignal` parameter to determine when a call should be aborted, and they should throw errors such as `APICallError` (or similar) with a correct `isRetryable` flag when, for example, network errors occur.
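As an illustration, a provider's API call might pass the signal through and classify failures. `RetryableError` here is a hypothetical stand-in for the SDK's standardized API call error with its `isRetryable` flag, not an AI SDK export:

```typescript
// Hypothetical stand-in for a standardized API call error.
class RetryableError extends Error {
  constructor(message: string, readonly isRetryable: boolean) {
    super(message);
  }
}

export async function callProviderApi(
  url: string,
  body: object,
  abortSignal?: AbortSignal,
) {
  try {
    // Pass the signal through so the AI SDK can abort the request;
    // abort errors are rethrown unchanged below.
    const response = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body),
      signal: abortSignal,
    });
    if (response.status >= 500) {
      // Server errors are usually transient, so mark them retryable.
      throw new RetryableError(`server error: ${response.status}`, true);
    }
    if (!response.ok) {
      // Client errors (e.g. bad request) will not succeed on retry.
      throw new RetryableError(`request failed: ${response.status}`, false);
    }
    return response.json();
  } catch (error) {
    if (error instanceof TypeError) {
      // fetch throws TypeError on network failures; those are retryable.
      throw new RetryableError('network error', true);
    }
    throw error;
  }
}
```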
