Update packages and examples to use OpenAI v4 SDK (#438)
Co-authored-by: Alex Rattray <rattray.alex@gmail.com>
Co-authored-by: Jared Palmer <jared@jaredpalmer.com>
3 people committed Aug 16, 2023
1 parent 792f67f commit dca1ed9
Showing 33 changed files with 735 additions and 407 deletions.
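
The heart of this change is the v4 SDK's restructured client: the v3-era `Configuration`/`OpenAIApi` pair (and `openai-edge`'s mirror of it) becomes a single default-export class with resource-namespaced methods. A sketch of the shape change using plain stubs, not the real SDK types:

```typescript
// Stubs only: these mimic the *shape* of the v3 and v4 clients, not their behavior.
type ChatParams = { model: string; messages: { role: string; content: string }[] }

// v3 / openai-edge: methods hang directly off an OpenAIApi instance.
class OpenAIApiV3Stub {
  createChatCompletion(params: ChatParams) {
    return { calledVia: 'createChatCompletion', model: params.model }
  }
}

// v4: methods are namespaced by resource, e.g. client.chat.completions.create(...).
class OpenAIV4Stub {
  chat = {
    completions: {
      create: (params: ChatParams) => ({
        calledVia: 'chat.completions.create',
        model: params.model
      })
    }
  }
}

const oldCall = new OpenAIApiV3Stub().createChatCompletion({ model: 'gpt-4', messages: [] })
const newCall = new OpenAIV4Stub().chat.completions.create({ model: 'gpt-4', messages: [] })
```

The diff below applies exactly this rename across the docs and examples.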
10 changes: 10 additions & 0 deletions .changeset/brown-elephants-invite.md
@@ -0,0 +1,10 @@
---
'ai': minor
'next-openai-rate-limits': minor
'next-openai': minor
'nuxt-openai': minor
'solidstart-openai': minor
'sveltekit-openai': minor
---

Update packages and examples to use openai@beta-4
4 changes: 3 additions & 1 deletion .changeset/config.json
@@ -14,6 +14,8 @@
"sveltekit-openai",
"nuxt-openai",
"nuxt-langchain",
"solidstart-openai"
"solidstart-openai",
"next-openai-rate-limits",
"next-replicate"
]
}
24 changes: 11 additions & 13 deletions docs/pages/docs/api-reference/openai-stream.mdx
@@ -9,15 +9,15 @@ This works with a variety of OpenAI models, including:
- `gpt-4`, `gpt-4-0314`, `gpt-4-32k`, `gpt-4-32k-0314`, `gpt-3.5-turbo`, `gpt-3.5-turbo-0301`
- `text-davinci-003`, `text-davinci-002`, `text-curie-001`, `text-babbage-001`, `text-ada-001`

It is designed to work with responses from either `openai.createCompletion` or `openai.createChatCompletion` methods. To get the full advantage of streaming capabilities, it's recommended to use the `openai-edge` package, which allows streaming of completions and chats from an edge function.
It is designed to work with responses from either `openai.completions.create` or `openai.chat.completions.create` methods.

Note: The official OpenAI API SDK does not yet support the Edge Runtime and will only work in serverless environments at the moment. The `openai-edge` package is based on `fetch` instead of `axios` (and thus works in the Edge Runtime) so we suggest using it instead.
Note: Prior to v4, the official OpenAI SDK did not support the Edge Runtime and only worked in serverless environments. The `openai-edge` package is based on `fetch` instead of `axios` (and thus works in the Edge Runtime), so we recommend using `openai` v4+ or `openai-edge`.
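
For intuition: OpenAI's streaming responses are server-sent events whose `data:` payloads carry JSON deltas, and `OpenAIStream` extracts the text from each event. Below is a rough sketch of that extraction with a hand-written payload. The field names match the chat-completion chunk format, but treat this as an illustration, not the library's actual implementation:

```typescript
// Parse a chunk of SSE text into the text deltas it carries.
// Chat-completion chunks put each token at choices[0].delta.content.
function extractTokens(sse: string): string[] {
  const tokens: string[] = []
  for (const line of sse.split('\n')) {
    if (!line.startsWith('data: ')) continue
    const payload = line.slice('data: '.length).trim()
    if (payload === '[DONE]') break // end-of-stream sentinel
    const parsed = JSON.parse(payload)
    const delta = parsed.choices?.[0]?.delta?.content
    if (typeof delta === 'string') tokens.push(delta)
  }
  return tokens
}

// Hand-written sample events in the shape the chat API streams.
const sample = [
  'data: {"choices":[{"delta":{"content":"Hel"}}]}',
  'data: {"choices":[{"delta":{"content":"lo"}}]}',
  'data: [DONE]'
].join('\n\n')

const tokens = extractTokens(sample)
```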

## Parameters [#parameters]

### `res: Response`

This is the response object returned by either `openai.createCompletion` or `openai.createChatCompletion` methods.
This is the response object returned by either `openai.completions.create` or `openai.chat.completions.create` methods.

### `cb?: AIStreamCallbacks`

@@ -30,20 +30,19 @@ Below are some examples of how to use `OpenAIStream` with chat and completion models:
### Chat Model Example

```tsx
import { Configuration, OpenAIApi } from 'openai-edge'
import OpenAI from 'openai'
import { OpenAIStream, StreamingTextResponse } from 'ai'

const config = new Configuration({
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY
})
const openai = new OpenAIApi(config)

export const runtime = 'edge'

export async function POST(req: Request) {
const { messages } = await req.json()
// Create a chat completion using OpenAIApi
const response = await openai.createChatCompletion({
// Create a chat completion using OpenAI
const response = await openai.chat.completions.create({
model: 'gpt-4',
stream: true,
messages
@@ -60,20 +59,19 @@ export async function POST(req: Request) {
### Completion Model Example

```tsx
import { Configuration, OpenAIApi } from 'openai-edge'
import OpenAI from 'openai'
import { OpenAIStream, StreamingTextResponse } from 'ai'

const config = new Configuration({
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY
})
const openai = new OpenAIApi(config)

export const runtime = 'edge'

export async function POST(req: Request) {
const { prompt } = await req.json()
// Create a completion using OpenAIApi
const response = await openai.createCompletion({
// Create a completion using OpenAI
const response = await openai.completions.create({
model: 'text-davinci-003',
stream: true,
prompt
7 changes: 3 additions & 4 deletions docs/pages/docs/api-reference/stream-to-response.mdx
@@ -49,16 +49,15 @@ Here is an example of using `streamToResponse` to pipe an AI stream to a Node.js

```js
import { createServer } from 'http'
import { Configuration, OpenAIApi } from 'openai-edge'
import OpenAI from 'openai'
import { OpenAIStream, streamToResponse } from 'ai'

const config = new Configuration({
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY
})
const openai = new OpenAIApi(config)

const server = createServer(async (req, res) => {
const aiResponse = await openai.createChatCompletion({
const aiResponse = await openai.chat.completions.create({
model: 'gpt-4',
stream: true,
messages: /* ... */
2 changes: 1 addition & 1 deletion docs/pages/docs/api-reference/streaming-text-response.mdx
@@ -33,7 +33,7 @@ import { OpenAIStream, StreamingTextResponse } from 'ai'
export const runtime = 'edge'

export async function POST() {
const response = await openai.createChatCompletion({
const response = await openai.chat.completions.create({
model: 'gpt-4',
stream: true,
messages: [{ role: 'user', content: 'What is love?' }]
3 changes: 1 addition & 2 deletions docs/pages/docs/api-reference/tokens.mdx
@@ -30,9 +30,8 @@ import { Tokens } from 'ai/react'

export const runtime = 'edge';

// Configuration / OpenAI setup left out for brevity.
export default function Page() {
const response = await openai.createChatCompletion({
const response = await openai.chat.completions.create({
model: 'gpt-3.5-turbo',
stream: true,
messages: [...]
8 changes: 3 additions & 5 deletions docs/pages/docs/api-reference/use-completion.mdx
@@ -491,23 +491,21 @@ The server API formats the prompt for the AI model, and then it uses the [OpenAI
```tsx
// app/api/completion/route.ts

import { Configuration, OpenAIApi } from 'openai-edge'
import OpenAI from 'openai'
import { OpenAIStream, StreamingTextResponse } from 'ai'

export const runtime = 'edge'

const apiConfig = new Configuration({
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY!
})

const openai = new OpenAIApi(apiConfig)

export async function POST(req: Request) {
// Extract the `prompt` from the body of the request
const { prompt } = await req.json()

// Request the OpenAI API for the response based on the prompt
const response = await openai.createChatCompletion({
const response = await openai.chat.completions.create({
model: 'gpt-3.5-turbo',
stream: true,
// a precise prompt is important for the AI to reply with the correct tokens
10 changes: 4 additions & 6 deletions docs/pages/docs/concepts/caching.mdx
@@ -13,18 +13,16 @@ Each stream helper for each provider has special lifecycle callbacks you can use
This example uses [Vercel KV](https://vercel.com/storage/kv) and Next.js to cache the OpenAI response for 1 hour.

```tsx filename="app/api/chat/route.ts"
import { Configuration, OpenAIApi } from 'openai-edge'
import OpenAI from 'openai'
import { OpenAIStream, StreamingTextResponse } from 'ai'
import kv from '@vercel/kv'

export const runtime = 'edge'

const apiConfig = new Configuration({
apiKey: process.env.OPENAI_API_KEY
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY!
})

const openai = new OpenAIApi(apiConfig)

export async function POST(req: Request) {
const { messages } = await req.json()
const key = JSON.stringify(messages) // come up with a key based on the request
@@ -56,7 +54,7 @@ export async function POST(req: Request) {
// return new StreamingTextResponse(stream);
}

const response = await openai.createChatCompletion({
const response = await openai.chat.completions.create({
// ... omitted for brevity
})

20 changes: 9 additions & 11 deletions docs/pages/docs/getting-started.mdx
@@ -63,10 +63,10 @@ We've written some code to get you started — follow the instructions below to

#### Install Dependencies

Next, we'll install `ai` and `openai-edge`. The latter is preferred over the official OpenAI SDK due to its compatibility with Vercel Edge Functions.
Next, we'll install `ai` and `openai`, OpenAI's official JavaScript SDK, which is compatible with the Vercel Edge Runtime.

```sh
pnpm install ai openai-edge
pnpm install ai openai
```

#### Configure OpenAI API Key
@@ -94,14 +94,13 @@ We've written some code to get you started — follow the instructions below to
Here's what the route handler should look like:

```tsx filename="app/api/completion/route.ts"
import { Configuration, OpenAIApi } from 'openai-edge';
import OpenAI from 'openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';

// Create an OpenAI API client (that's edge friendly!)
const openAIConfig = new Configuration({
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(openAIConfig);

// Set the runtime to edge for best performance
export const runtime = 'edge';
@@ -110,7 +109,7 @@ We've written some code to get you started — follow the instructions below to
const { prompt } = await req.json();

// Ask OpenAI for a streaming completion given the prompt
const response = await openai.createCompletion({
const response = await openai.completions.create({
model: 'text-davinci-003',
stream: true,
temperature: 0.6,
@@ -137,15 +136,14 @@ We've written some code to get you started — follow the instructions below to
Here's what the endpoint should look like:

```tsx filename="src/routes/api/completion/+server.js"
import { Configuration, OpenAIApi } from 'openai-edge';
import OpenAI from 'openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';
import { OPENAI_API_KEY } from '$env/static/private';

// Create an OpenAI API client (that's edge friendly!)
const openAIConfig = new Configuration({
const openai = new OpenAI({
apiKey: OPENAI_API_KEY,
});
const openai = new OpenAIApi(openAIConfig);

// Set the runtime to edge for best performance
export const config = {
@@ -156,7 +154,7 @@ We've written some code to get you started — follow the instructions below to
const { prompt } = await request.json();

// Ask OpenAI for a streaming completion given the prompt
const response = await openai.createCompletion({
const response = await openai.completions.create({
model: 'text-davinci-003',
stream: true,
temperature: 0.6,
@@ -178,7 +176,7 @@ We've written some code to get you started — follow the instructions below to
```
</Tab>
</Tabs>
In the above code, the `openai.createCompletion` method gets a response stream from the OpenAI API. We then pass the response into the `OpenAIStream` provided by this library.
In the above code, the `openai.completions.create` method gets a response stream from the OpenAI API. We then pass the response into the `OpenAIStream` provided by this library.
Then we use `StreamingTextResponse` to set the proper headers and response details in order to stream the response back to the client.
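
That hand-off, provider stream in and streaming `Response` out, can be mimicked without any API call. The sketch below assumes the provider stream already yields plain text bytes (the real OpenAI stream emits SSE events that `OpenAIStream` parses first), and `streamingTextResponse` here is a stand-in, not the SDK's actual `StreamingTextResponse` class:

```typescript
// Stand-in for a provider's streaming body: a ReadableStream of UTF-8 bytes.
function fakeProviderStream(tokens: string[]): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder()
  return new ReadableStream({
    start(controller) {
      for (const t of tokens) controller.enqueue(encoder.encode(t))
      controller.close()
    }
  })
}

// Stand-in for StreamingTextResponse: a Response whose body streams as plain text.
function streamingTextResponse(stream: ReadableStream<Uint8Array>): Response {
  return new Response(stream, {
    headers: { 'Content-Type': 'text/plain; charset=utf-8' }
  })
}

const response = streamingTextResponse(fakeProviderStream(['Stream', 'ing ', 'works']))
const body = await response.text() // a real client would read this incrementally instead
```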

#### Wire up a UI
18 changes: 7 additions & 11 deletions docs/pages/docs/guides/frameworks/nextjs-app.mdx
@@ -11,30 +11,28 @@ The Vercel AI SDK has been built with [Next.js App Router](https://nextjs.org/do
Using a [Route Handler](https://nextjs.org/docs/app/building-your-application/routing/router-handlers)
for your API requests is the recommended way to use the Vercel AI SDK with Next.js.

Below is a minimal route handler for using the OpenAI Chat API with `openai-edge` and the Vercel AI SDK. Consult our [guides](/docs/guides) for examples of using other providers.
Below is a minimal route handler for using the OpenAI Chat API with the `openai` API client and the Vercel AI SDK. Consult our [guides](/docs/guides) for examples of using other providers.

```typescript
// app/api/chat/route.ts

import { Configuration, OpenAIApi } from 'openai-edge'
import OpenAI from 'openai'
import { OpenAIStream, StreamingTextResponse } from 'ai'

// Optional, but recommended: run on the edge runtime.
// See https://vercel.com/docs/concepts/functions/edge-functions
export const runtime = 'edge'

const apiConfig = new Configuration({
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY!
})

const openai = new OpenAIApi(apiConfig)

export async function POST(req: Request) {
// Extract the `messages` from the body of the request
const { messages } = await req.json()

// Request the OpenAI API for the response based on the prompt
const response = await openai.createChatCompletion({
const response = await openai.chat.completions.create({
model: 'gpt-3.5-turbo',
stream: true,
messages: messages
@@ -56,28 +54,26 @@ The Route Handler code can be adapted to work with [Server Components](https://n

```typescript
// app/page.tsx
import { Configuration, OpenAIApi } from 'openai-edge'
import OpenAI from 'openai'
import { OpenAIStream } from 'ai'
import { Suspense } from 'react'

// Optional, but recommended: run on the edge runtime.
// See https://vercel.com/docs/concepts/functions/edge-functions
export const runtime = 'edge'

const apiConfig = new Configuration({
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY!
})

const openai = new OpenAIApi(apiConfig)

export default async function Page({
searchParams
}: {
// note that using searchParams opts your page into dynamic rendering. See https://nextjs.org/docs/app/api-reference/file-conventions/page#searchparams-optional
searchParams: Record<string, string>
}) {
// Request the OpenAI API for the response based on the prompt
const response = await openai.createChatCompletion({
const response = await openai.chat.completions.create({
model: 'gpt-4',
stream: true,
messages: [
17 changes: 7 additions & 10 deletions docs/pages/docs/guides/frameworks/nextjs-pages.mdx
@@ -12,17 +12,16 @@ The Edge Runtime supports the same Response types as in the App Router, so the c

```jsx
import { OpenAIStream, StreamingTextResponse } from 'ai'
import { Configuration, OpenAIApi } from 'openai-edge'
import OpenAI from 'openai'

export const runtime = 'edge'

const config = new Configuration({
apiKey: process.env.OPENAI_API_KEY
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY!
})
const openai = new OpenAIApi(config)

export default async function handler(req: Request, res: Response) {
const response = await openai.createChatCompletion({
const response = await openai.chat.completions.create({
model: 'gpt-3.5-turbo',
stream: true,
messages: [{ role: 'user', content: 'What is love?' }]
@@ -40,19 +39,17 @@ in place of `StreamingTextResponse`.
```jsx
import { OpenAIStream, streamToResponse } from 'ai'
import { NextApiRequest, NextApiResponse } from 'next'
import { Configuration, OpenAIApi } from 'openai-edge'
import OpenAI from 'openai'

const config = new Configuration({
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY
})

const openai = new OpenAIApi(config)

export default async function handler(
req: NextApiRequest,
res: NextApiResponse
) {
const aiResponse = await openai.createChatCompletion({
const aiResponse = await openai.chat.completions.create({
model: 'gpt-3.5-turbo',
stream: true,
messages: [{ role: 'user', content: 'What is love?' }]
2 changes: 1 addition & 1 deletion docs/pages/docs/guides/frameworks/solidjs.mdx
@@ -62,7 +62,7 @@ export const POST = async (event: APIEvent) => {
const { messages } = await event.request.json()

// Ask OpenAI for a streaming chat completion given the prompt
const response = await openai.createChatCompletion({
const response = await openai.chat.completions.create({
model: 'gpt-3.5-turbo',
stream: true,
messages: messages.map((message: any) => ({
