@@ -1,11 +1,12 @@
---
updated: 2024-11-21
difficulty: Beginner
content_type: 📝 Tutorial
pcx_content_type: tutorial
title: Build a Retrieval Augmented Generation (RAG) AI
products:
- Workers
- D1
- Vectorize
tags:
- AI
@@ -24,7 +25,7 @@ At the end of this tutorial, you will have built an AI tool that allows you to s

<Render file="prereqs" product="workers" />

You will also need access to [Vectorize](/vectorize/platform/pricing/). During this tutorial, we will show how you can optionally integrate with [Anthropic Claude](https://www.anthropic.com) as well. You will need an [Anthropic API key](https://docs.anthropic.com/en/api/getting-started) to do so.

## 1. Create a new Worker project

@@ -182,7 +183,42 @@ Now, we can add a new note to our database using `wrangler d1 execute`:

```sh
npx wrangler d1 execute database --remote --command "INSERT INTO notes (text) VALUES ('The best pizza topping is pepperoni')"
```
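
You can optionally confirm that the note was inserted by querying the table with the same command:

```sh
npx wrangler d1 execute database --remote --command "SELECT * FROM notes"
```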

## 5. Creating a workflow

Before we begin creating notes, we will introduce a [Cloudflare Workflow](/workflows). This will allow us to define a durable workflow that can safely and robustly execute all the steps of the RAG process.

To begin, add a new `[[workflows]]` block to `wrangler.toml`:

```toml
# ... existing wrangler configuration

[[workflows]]
name = "rag"
binding = "RAG_WORKFLOW"
class_name = "RAGWorkflow"
```

In `src/index.js`, add a new class called `RAGWorkflow` that extends `WorkflowEntrypoint`, imported from the `cloudflare:workers` module:

```js
import { WorkflowEntrypoint } from "cloudflare:workers";

export class RAGWorkflow extends WorkflowEntrypoint {
	async run(event, step) {
		await step.do('example step', async () => {
			console.log("Hello World!")
		})
	}
}
```

This class will define a single workflow step that will log "Hello World!" to the console. You can add as many steps as you need to your workflow.

On its own, this workflow will not do anything. To execute the workflow, we will call the `RAG_WORKFLOW` binding, passing in any parameters that the workflow needs to properly complete. Here is an example of how we can call the workflow:

```js
env.RAG_WORKFLOW.create({ params: { text } })
```

## 6. Creating notes and adding them to Vectorize

To expand your Workers function to handle multiple routes, we will add `hono`, a routing library for Workers. This will allow us to create a new route for adding notes to our database. Install `hono` using `npm`:

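```sh
npm install hono
```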
@@ -207,61 +243,69 @@ app.get("/", async (c) => {
export default app;
```
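
Most of the updated file is collapsed in the diff above. As a rough sketch (not necessarily the exact code from this tutorial), the Hono scaffolding looks something like this, with the route body being the Workers AI call from the previous steps:

```js
import { Hono } from "hono";

const app = new Hono();

app.get("/", async (c) => {
	// ... the existing Workers AI call from the previous step
});

export default app;
```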

This will establish a route at the root path `/` that is functionally equivalent to the previous version of your application.

Now, we can update our workflow to add notes to our database and generate the related embeddings for them.

This example features the [`@cf/baai/bge-base-en-v1.5` model](/workers-ai/models/bge-base-en-v1.5/), which can be used to create an embedding. Embeddings are stored and retrieved inside [Vectorize](/vectorize/), Cloudflare's vector database. The user query is also turned into an embedding so that it can be used for searching within Vectorize.

```js
app.post("/notes", async (c) => {
const { text } = await c.req.json();
if (!text) {
return c.text("Missing text", 400);
}

const { results } = await c.env.DB.prepare(
"INSERT INTO notes (text) VALUES (?) RETURNING *",
)
.bind(text)
.run();

const record = results.length ? results[0] : null;

if (!record) {
return c.text("Failed to create note", 500);
export class RAGWorkflow {
async run(event, step) {
const { text } = event.params

const record = await step.do(`create database record`, async () => {
const query = "INSERT INTO notes (text) VALUES (?) RETURNING *"

const { results } = await env.DATABASE.prepare(query)
.bind(text)
.run()

const record = results[0]
if (!record) throw new Error("Failed to create note")
return record;
})

const embedding = await step.do(`generate embedding`, async () => {
const embeddings = await env.AI.run('@cf/baai/bge-base-en-v1.5', { text: text })
const values = embeddings.data[0]
if (!values) throw new Error("Failed to generate vector embedding")
return values
})

await step.do(`insert vector`, async () => {
return env.VECTOR_INDEX.upsert([
{
id: record.id.toString(),
values: embedding,
}
]);
})
}

const { data } = await c.env.AI.run("@cf/baai/bge-base-en-v1.5", {
text: [text],
});
const values = data[0];

if (!values) {
return c.text("Failed to generate vector embedding", 500);
}

const { id } = record;
const inserted = await c.env.VECTOR_INDEX.upsert([
{
id: id.toString(),
values,
},
]);

return c.json({ id, text, inserted });
});
}
```

The workflow does the following things:

1. Accepts a `text` parameter.
2. Inserts a new row into the `notes` table in D1, and retrieves the `id` of the new row.
3. Converts the `text` into a vector using the `@cf/baai/bge-base-en-v1.5` embeddings model.
4. Upserts the `id` and vector values into the `vector-index` index in Vectorize.

By doing this, you will create a new vector representation of the note, which can be used to retrieve the note later.

To complete the code, we will add a route that allows users to submit notes to the database. This route will parse the JSON request body, get the `text` parameter, and create a new instance of the workflow, passing the parameter:

```js
app.post('/notes', async (c) => {
const { text } = await c.req.json();
if (!text) return c.text("Missing text", 400);
await c.env.RAG_WORKFLOW.create({ params: { text } })
return c.text("Created note", 201);
})
```
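
With `npx wrangler dev` running locally (port `8787` by default), you can try the new route with a request like this:

```sh
curl -X POST http://localhost:8787/notes \
  -H "Content-Type: application/json" \
  -d '{"text": "The best pizza topping is pepperoni"}'
```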

## 7. Querying Vectorize to retrieve notes

To complete your code, you can update the root path (`/`) to query Vectorize. You will convert the query into a vector, and then use the `vector-index` index to find the most similar vectors.

@@ -319,7 +363,6 @@ app.get('/', async (c) => {
)

return c.text(answer);

});

app.onError((err, c) => {
@@ -329,7 +372,80 @@ export default app;
export default app;
```
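
The diff above only shows fragments of this route. As a rough sketch (not necessarily the exact code from this tutorial), the query flow looks something like the following, reusing the `AI`, `VECTOR_INDEX`, and `DATABASE` bindings from the workflow and a hypothetical `text` query parameter:

```js
app.get('/', async (c) => {
	const question = c.req.query('text') || "What is the square root of 9?";

	// Convert the query into an embedding using the same model as the notes
	const embeddings = await c.env.AI.run('@cf/baai/bge-base-en-v1.5', { text: question });
	const vectors = embeddings.data[0];

	// Find the closest vector in Vectorize, then load the matching notes from D1
	const vectorQuery = await c.env.VECTOR_INDEX.query(vectors, { topK: 1 });
	const vecIds = vectorQuery.matches.map(match => match.id);

	let notes = [];
	if (vecIds.length) {
		const placeholders = vecIds.map(() => '?').join(', ');
		const query = `SELECT * FROM notes WHERE id IN (${placeholders})`;
		const { results } = await c.env.DATABASE.prepare(query).bind(...vecIds).all();
		if (results) notes = results.map(note => note.text);
	}

	const contextMessage = notes.length
		? `Context:\n${notes.map(note => `- ${note}`).join("\n")}`
		: "";

	const systemPrompt = `When answering the question or responding, use the context provided, if it is provided and relevant.`;

	// Generate an answer with Workers AI, grounding it in the retrieved notes
	const { response: answer } = await c.env.AI.run('@cf/meta/llama-3.1-8b-instruct', {
		messages: [
			...(notes.length ? [{ role: 'system', content: contextMessage }] : []),
			{ role: 'system', content: systemPrompt },
			{ role: 'user', content: question },
		],
	});

	return c.text(answer);
});
```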

## 8. Adding the Anthropic Claude model (optional)

If you are working with larger documents, you have the option to use Anthropic's [Claude models](https://claude.ai/), which have large context windows and are well-suited to RAG workflows.

To begin, install the `@anthropic-ai/sdk` package:

```sh
npm install @anthropic-ai/sdk
```

In `src/index.js`, you can update the `GET /` route to check for the `ANTHROPIC_API_KEY` environment variable. If it's set, we can generate text using the Anthropic SDK. If it isn't set, we'll fall back to the existing Workers AI code:

```js
import Anthropic from '@anthropic-ai/sdk';

app.get('/', async (c) => {
	// ... Existing code
	const systemPrompt = `When answering the question or responding, use the context provided, if it is provided and relevant.`

	let modelUsed = ""
	let response = null

	if (c.env.ANTHROPIC_API_KEY) {
		const anthropic = new Anthropic({
			apiKey: c.env.ANTHROPIC_API_KEY
		})

		const model = "claude-3-5-sonnet-latest"
		modelUsed = model

		const message = await anthropic.messages.create({
			max_tokens: 1024,
			model,
			messages: [
				{ role: 'user', content: question }
			],
			system: [systemPrompt, notes.length ? contextMessage : ''].join(" ")
		})

		response = {
			response: message.content.map(content => content.text).join("\n")
		}
	} else {
		const model = "@cf/meta/llama-3.1-8b-instruct"
		modelUsed = model

		response = await c.env.AI.run(
			model,
			{
				messages: [
					...(notes.length ? [{ role: 'system', content: contextMessage }] : []),
					{ role: 'system', content: systemPrompt },
					{ role: 'user', content: question }
				]
			}
		)
	}

	if (response) {
		c.header('x-model-used', modelUsed)
		return c.text(response.response)
	} else {
		return c.text("We were unable to generate output", 500)
	}
})
```

Finally, you'll need to set the `ANTHROPIC_API_KEY` environment variable in your Workers application. You can do this by using `wrangler secret put`:

```sh
npx wrangler secret put ANTHROPIC_API_KEY
```

## 9. Deleting notes and vectors

If you no longer need a note, you can delete it from the database. Any time that you delete a note, you will also need to delete the corresponding vector from Vectorize. You can implement this by building a `DELETE /notes/:id` route in your `src/index.js` file:

@@ -346,7 +462,85 @@ app.delete("/notes/:id", async (c) => {
});
```
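
Because the route body is collapsed in the diff above, here is a rough sketch of what such a handler can look like, assuming the same `DATABASE` and `VECTOR_INDEX` bindings used earlier; Vectorize exposes a `deleteByIds()` method for removing vectors:

```js
app.delete("/notes/:id", async (c) => {
	const { id } = c.req.param();

	// Remove the note row from D1
	await c.env.DATABASE.prepare(`DELETE FROM notes WHERE id = ?`).bind(id).run();

	// Remove the corresponding vector from Vectorize
	await c.env.VECTOR_INDEX.deleteByIds([id]);

	return c.body(null, 204);
});
```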

## 10. Text splitting (optional)

For large pieces of text, it is recommended to split the text into smaller chunks. This allows LLMs to more effectively gather relevant context, without needing to retrieve large pieces of text.

To implement this, we'll add a new npm package to our project, `@langchain/textsplitters`:

```sh
npm install @langchain/textsplitters
```

The `RecursiveCharacterTextSplitter` class provided by this package will split the text into smaller chunks. It can be customized to your liking, but the default config works in most cases:

```js
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

const text = "Some long piece of text...";

const splitter = new RecursiveCharacterTextSplitter({
	// These can be customized to change the chunking size
	// chunkSize: 1000,
	// chunkOverlap: 200,
});

const output = await splitter.createDocuments([text]);
console.log(output) // [{ pageContent: 'Some long piece of text...' }]
```

To use this splitter, we'll update the workflow to split the text into smaller chunks. We'll then iterate over the chunks and run the rest of the workflow for each chunk of text:

```js
export class RAGWorkflow extends WorkflowEntrypoint {
	async run(event, step) {
		const env = this.env
		const { text } = event.params

		let texts = await step.do('split text', async () => {
			const splitter = new RecursiveCharacterTextSplitter();
			const output = await splitter.createDocuments([text]);
			return output.map(doc => doc.pageContent);
		})

		console.log(`RecursiveCharacterTextSplitter generated ${texts.length} chunks`)

		for (const index in texts) {
			const text = texts[index]

			const record = await step.do(`create database record: ${index}/${texts.length}`, async () => {
				const query = "INSERT INTO notes (text) VALUES (?) RETURNING *"

				const { results } = await env.DATABASE.prepare(query)
					.bind(text)
					.run()

				const record = results[0]
				if (!record) throw new Error("Failed to create note")
				return record;
			})

			const embedding = await step.do(`generate embedding: ${index}/${texts.length}`, async () => {
				const embeddings = await env.AI.run('@cf/baai/bge-base-en-v1.5', { text: text })
				const values = embeddings.data[0]
				if (!values) throw new Error("Failed to generate vector embedding")
				return values
			})

			await step.do(`insert vector: ${index}/${texts.length}`, async () => {
				return env.VECTOR_INDEX.upsert([
					{
						id: record.id.toString(),
						values: embedding,
					}
				]);
			})
		}
	}
}
```

Now, when large pieces of text are submitted to the `/notes` endpoint, they will be split into smaller chunks, and each chunk will be processed by the workflow.

## 11. Deploy your project

If you did not deploy your Worker during [step 1](/workers/get-started/guide/#1-create-a-new-worker-project), deploy your Worker via Wrangler, to a `*.workers.dev` subdomain, or a [Custom Domain](/workers/configuration/routing/custom-domains/), if you have one configured. If you have not configured any subdomain or domain, Wrangler will prompt you during the publish process to set one up.

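For example, from your project directory:

```sh
npx wrangler deploy
```
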
@@ -374,4 +568,4 @@ To do more:
- Explore [Examples](/workers/examples/) to experiment with copy and paste Worker code.
- Understand how Workers works in [Reference](/workers/reference/).
- Learn about Workers features and functionality in [Platform](/workers/platform/).
- Set up [Wrangler](/workers/wrangler/install-and-update/) to programmatically create, test, and deploy your Worker projects.