Adds OpenAI tools agent example (#3216)
* Adds OpenAI tools agent example

* Polish

* Fix lint

* Restructure
jacoblee93 committed Nov 9, 2023
1 parent 01a98c4 commit 267ffbd
Showing 23 changed files with 443 additions and 13 deletions.
@@ -5,7 +5,7 @@ sidebar_position: 0

# OpenAI functions

Certain OpenAI models (like `gpt-3.5-turbo` and `gpt-4`) have been fine-tuned to detect when a function should to be called and respond with the inputs that should be passed to the function.
Certain OpenAI models (like `gpt-3.5-turbo` and `gpt-4`) have been fine-tuned to detect when a function should be called and respond with the inputs that should be passed to the function.
In an API call, you can describe functions and have the model intelligently choose to output a JSON object containing arguments to call those functions.
The goal of the OpenAI Function APIs is to more reliably return valid and useful function calls than a generic text completion or chat API.

@@ -22,7 +22,7 @@ Must be used with an [OpenAI Functions](https://platform.openai.com/docs/guides/

# With LCEL

In this example we'll use LCEL to construct a highly customizable agent that is given two tools: search and calculator.
In this example we'll use LCEL to construct a customizable agent that is given two tools: search and calculator.
We'll then pull in a prompt template from the [LangChainHub](https://smith.langchain.com/hub) and pass that to our runnable agent.
Lastly we'll use the default OpenAI functions output parser `OpenAIFunctionsAgentOutputParser`.
This output parser contains a method `parseAIMessage` which, when provided with a message, returns either an instance of `FunctionsAgentAction` if there is another action to be taken by the agent, or `AgentFinish` if the agent has completed its objective.
131 changes: 131 additions & 0 deletions docs/docs/modules/agents/agent_types/openai_tools_agent.mdx
@@ -0,0 +1,131 @@
---
hide_table_of_contents: true
sidebar_position: 1
---

# OpenAI tool calling

:::tip Compatibility
Tool calling is new and only available on [OpenAI's latest models](https://platform.openai.com/docs/guides/function-calling).
:::

OpenAI's latest `gpt-3.5-turbo-1106` and `gpt-4-1106-preview` models have been fine-tuned to detect when one or more tools should be called to gather sufficient information
to answer the initial query, and respond with the inputs that should be passed to those tools.

While the goal of more reliably returning valid and useful function calls is the same as for the functions agent, the ability to call multiple tools at once results in fewer roundtrips for complex questions.
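
For illustration, a single assistant response to a question that needs two lookups can contain several tool calls at once, roughly like this (an illustrative sketch of OpenAI's raw response shape; the ids and arguments are made up, and when going through LangChain's `ChatOpenAI` these calls surface on the returned message's `additional_kwargs`):

```typescript
// Hypothetical assistant message containing two parallel tool calls.
const assistantMessage = {
  role: "assistant",
  content: null,
  tool_calls: [
    {
      id: "call_abc123", // made-up id
      type: "function",
      function: {
        name: "get_current_weather",
        arguments: '{"location":"San Francisco, CA","unit":"fahrenheit"}',
      },
    },
    {
      id: "call_def456", // made-up id
      type: "function",
      function: {
        name: "get_current_weather",
        arguments: '{"location":"Tokyo, Japan","unit":"celsius"}',
      },
    },
  ],
};
```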

The OpenAI Tools Agent is designed to work with these models.

import CodeBlock from "@theme/CodeBlock";
import RunnableExample from "@examples/agents/openai_tools_runnable.ts";

# Usage

In this example we'll use LCEL to construct a customizable agent with a mocked weather tool and a calculator.

The basic flow is this:

1. Define the tools the agent will be able to call. You can use [OpenAI's tool syntax](https://platform.openai.com/docs/guides/function-calling) (a raw example is sketched after this list), or LangChain tool instances as shown below.
2. Initialize our model and bind those tools as arguments.
3. Define a function that formats any previous agent steps as messages. The agent will pass those back to OpenAI for the next agent iteration.
4. Create a `RunnableSequence` that will act as the agent. We use a specialized output parser to extract any tool calls from the model's output.
5. Initialize an `AgentExecutor` with the agent and the tools to run the agent in a loop.
6. Run the `AgentExecutor` and see the output.
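
If you'd rather pass raw OpenAI tool definitions in step 1, they look roughly like this (an illustrative sketch following OpenAI's function-calling guide; `formatToOpenAITool` produces objects of approximately this shape from LangChain tools):

```typescript
// A hypothetical raw tool definition in OpenAI's own format.
const rawWeatherTool = {
  type: "function",
  function: {
    name: "get_current_weather",
    description: "Get the current weather in a given location",
    parameters: {
      // JSON Schema describing the tool's arguments (illustrative)
      type: "object",
      properties: {
        location: {
          type: "string",
          description: "The city and state, e.g. San Francisco, CA",
        },
        unit: { type: "string", enum: ["celsius", "fahrenheit"] },
      },
      required: ["location", "unit"],
    },
  },
};
```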

Here's how the full example looks:

<CodeBlock language="typescript">{RunnableExample}</CodeBlock>

You can check out [this example LangSmith trace](https://smith.langchain.com/public/2bbffb7d-4f9d-47ad-90be-09910e5b4b34/r) for an inspectable view of the steps taken to answer the question.

## Adding memory

We can also use memory to save our previous agent inputs and outputs, and pass them through to each agent iteration.
Using memory gives the agent context on past interactions beyond the current run's `agent_scratchpad`, which can lead to more accurate responses.

Adding memory only requires a few changes to the above example.

First, import and instantiate your memory class. In this example we'll use `BufferMemory`.

```typescript
import { BufferMemory } from "langchain/memory";
```

```typescript
const memory = new BufferMemory({
  memoryKey: "history", // The object key to store the memory under
  inputKey: "question", // The object key for the input
  outputKey: "answer", // The object key for the output
  returnMessages: true,
});
```
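
For reference, the two memory methods used below are `saveContext` and `loadMemoryVariables`. A minimal sketch of the round trip, assuming the keys configured above:

```typescript
// Hypothetical round trip through the memory instance configured above.
await memory.saveContext({ question: "Hi there" }, { answer: "Hello!" });

const { history } = await memory.loadMemoryVariables({});
// Because `returnMessages: true` is set, `history` is an array of chat messages
// (a HumanMessage followed by an AIMessage) rather than a single string.
console.log(history);
```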

Then, update your prompt to include another `MessagesPlaceholder`. This time we'll be passing in the `chat_history` variable from memory.

```typescript
const prompt = ChatPromptTemplate.fromMessages([
  ["ai", "You are a helpful assistant"],
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
  new MessagesPlaceholder("chat_history"),
]);
```

Next, inside your `RunnableSequence`, add a field that loads the `chat_history` from memory.

```typescript
const runnableAgent = RunnableSequence.from([
  {
    input: (i: { input: string; steps: ToolsAgentStep[] }) => i.input,
    agent_scratchpad: (i: { input: string; steps: ToolsAgentStep[] }) =>
      formatToOpenAIToolMessages(i.steps),
    // Load memory here
    chat_history: async (_: { input: string; steps: ToolsAgentStep[] }) => {
      const { history } = await memory.loadMemoryVariables({});
      return history;
    },
  },
  prompt,
  modelWithTools,
  new OpenAIToolsAgentOutputParser(),
]);
```
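
The snippet above reuses the `prompt`, `modelWithTools`, and tools from the main example. Since the agent changed, recreate the executor around it as well (a short sketch mirroring the earlier `AgentExecutor` setup):

```typescript
// Wrap the memory-aware agent in an executor, reusing the same tools as before.
const executor = AgentExecutor.fromAgentAndTools({
  agent: runnableAgent,
  tools,
});
```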

Finally, we can call the agent and save the output to memory once the response is returned.

```typescript
const query = "What is the weather in New York?";
console.log(`Calling agent executor with query: ${query}`);
const result = await executor.call({
  input: query,
});
console.log(result);
/*
  Calling agent executor with query: What is the weather in New York?
  {
    output: 'The current weather in New York is sunny with a temperature of 66 degrees Fahrenheit. The humidity is at 54% and the wind is blowing at 6 mph. There is 0% chance of precipitation.'
  }
*/

// Save the result and initial input to memory
await memory.saveContext(
  {
    question: query,
  },
  {
    answer: result.output,
  }
);

const query2 = "Do I need a jacket?";
const result2 = await executor.call({
  input: query2,
});
console.log(result2);
/*
  {
    output: 'Based on the current weather in New York, you may not need a jacket. However, if you feel cold easily or will be outside for a long time, you might want to bring a light jacket just in case.'
  }
*/
```
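
If the conversation continues past this point, save each new exchange the same way so later turns see the full history. For example (using the same keys configured on the memory above):

```typescript
// Persist the follow-up exchange as well.
await memory.saveContext({ question: query2 }, { answer: result2.output });
```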
1 change: 1 addition & 0 deletions environment_tests/test-exports-bun/src/entrypoints.js
@@ -3,6 +3,7 @@ export * from "langchain/load/serializable";
export * from "langchain/agents";
export * from "langchain/agents/toolkits";
export * from "langchain/agents/format_scratchpad";
export * from "langchain/agents/format_scratchpad/openai_tools";
export * from "langchain/agents/format_scratchpad/log";
export * from "langchain/agents/format_scratchpad/xml";
export * from "langchain/agents/format_scratchpad/log_to_message";
1 change: 1 addition & 0 deletions environment_tests/test-exports-cf/src/entrypoints.js
@@ -3,6 +3,7 @@ export * from "langchain/load/serializable";
export * from "langchain/agents";
export * from "langchain/agents/toolkits";
export * from "langchain/agents/format_scratchpad";
export * from "langchain/agents/format_scratchpad/openai_tools";
export * from "langchain/agents/format_scratchpad/log";
export * from "langchain/agents/format_scratchpad/xml";
export * from "langchain/agents/format_scratchpad/log_to_message";
1 change: 1 addition & 0 deletions environment_tests/test-exports-cjs/src/entrypoints.js
@@ -3,6 +3,7 @@ const load_serializable = require("langchain/load/serializable");
const agents = require("langchain/agents");
const agents_toolkits = require("langchain/agents/toolkits");
const agents_format_scratchpad = require("langchain/agents/format_scratchpad");
const agents_format_scratchpad_openai_tools = require("langchain/agents/format_scratchpad/openai_tools");
const agents_format_scratchpad_log = require("langchain/agents/format_scratchpad/log");
const agents_format_scratchpad_xml = require("langchain/agents/format_scratchpad/xml");
const agents_format_scratchpad_log_to_message = require("langchain/agents/format_scratchpad/log_to_message");
1 change: 1 addition & 0 deletions environment_tests/test-exports-esbuild/src/entrypoints.js
@@ -3,6 +3,7 @@ import * as load_serializable from "langchain/load/serializable";
import * as agents from "langchain/agents";
import * as agents_toolkits from "langchain/agents/toolkits";
import * as agents_format_scratchpad from "langchain/agents/format_scratchpad";
import * as agents_format_scratchpad_openai_tools from "langchain/agents/format_scratchpad/openai_tools";
import * as agents_format_scratchpad_log from "langchain/agents/format_scratchpad/log";
import * as agents_format_scratchpad_xml from "langchain/agents/format_scratchpad/xml";
import * as agents_format_scratchpad_log_to_message from "langchain/agents/format_scratchpad/log_to_message";
1 change: 1 addition & 0 deletions environment_tests/test-exports-esm/src/entrypoints.js
@@ -3,6 +3,7 @@ import * as load_serializable from "langchain/load/serializable";
import * as agents from "langchain/agents";
import * as agents_toolkits from "langchain/agents/toolkits";
import * as agents_format_scratchpad from "langchain/agents/format_scratchpad";
import * as agents_format_scratchpad_openai_tools from "langchain/agents/format_scratchpad/openai_tools";
import * as agents_format_scratchpad_log from "langchain/agents/format_scratchpad/log";
import * as agents_format_scratchpad_xml from "langchain/agents/format_scratchpad/xml";
import * as agents_format_scratchpad_log_to_message from "langchain/agents/format_scratchpad/log_to_message";
1 change: 1 addition & 0 deletions environment_tests/test-exports-vercel/src/entrypoints.js
@@ -3,6 +3,7 @@ export * from "langchain/load/serializable";
export * from "langchain/agents";
export * from "langchain/agents/toolkits";
export * from "langchain/agents/format_scratchpad";
export * from "langchain/agents/format_scratchpad/openai_tools";
export * from "langchain/agents/format_scratchpad/log";
export * from "langchain/agents/format_scratchpad/xml";
export * from "langchain/agents/format_scratchpad/log_to_message";
1 change: 1 addition & 0 deletions environment_tests/test-exports-vite/src/entrypoints.js
@@ -3,6 +3,7 @@ export * from "langchain/load/serializable";
export * from "langchain/agents";
export * from "langchain/agents/toolkits";
export * from "langchain/agents/format_scratchpad";
export * from "langchain/agents/format_scratchpad/openai_tools";
export * from "langchain/agents/format_scratchpad/log";
export * from "langchain/agents/format_scratchpad/xml";
export * from "langchain/agents/format_scratchpad/log_to_message";
4 changes: 2 additions & 2 deletions examples/src/agents/openai_runnable.ts
@@ -36,7 +36,7 @@ const prompt = ChatPromptTemplate.fromMessages([
* Here we're using the `formatToOpenAIFunction` util function
* to format our tools into the proper schema for OpenAI functions.
*/
const modelWithTools = model.bind({
const modelWithFunctions = model.bind({
  functions: [...tools.map((tool) => formatToOpenAIFunction(tool))],
});
/**
@@ -68,7 +68,7 @@ const runnableAgent = RunnableSequence.from([
      formatAgentSteps(i.steps),
  },
  prompt,
  modelWithTools,
  modelWithFunctions,
  new OpenAIFunctionsAgentOutputParser(),
]);
/** Pass the runnable along with the tools to create the Agent Executor */
73 changes: 73 additions & 0 deletions examples/src/agents/openai_tools_runnable.ts
@@ -0,0 +1,73 @@
import { z } from "zod";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { DynamicStructuredTool, formatToOpenAITool } from "langchain/tools";
import { Calculator } from "langchain/tools/calculator";
import { ChatPromptTemplate, MessagesPlaceholder } from "langchain/prompts";
import { RunnableSequence } from "langchain/schema/runnable";
import { AgentExecutor } from "langchain/agents";
import { formatToOpenAIToolMessages } from "langchain/agents/format_scratchpad/openai_tools";
import {
  OpenAIToolsAgentOutputParser,
  type ToolsAgentStep,
} from "langchain/agents/openai/output_parser";

const model = new ChatOpenAI({
  modelName: "gpt-3.5-turbo-1106",
  temperature: 0,
});

const weatherTool = new DynamicStructuredTool({
  name: "get_current_weather",
  description: "Get the current weather in a given location",
  func: async ({ location }) => {
    if (location.toLowerCase().includes("tokyo")) {
      return JSON.stringify({ location, temperature: "10", unit: "celsius" });
    } else if (location.toLowerCase().includes("san francisco")) {
      return JSON.stringify({
        location,
        temperature: "72",
        unit: "fahrenheit",
      });
    } else {
      return JSON.stringify({ location, temperature: "22", unit: "celsius" });
    }
  },
  schema: z.object({
    location: z.string().describe("The city and state, e.g. San Francisco, CA"),
    unit: z.enum(["celsius", "fahrenheit"]),
  }),
});

const tools = [new Calculator(), weatherTool];

// Convert to OpenAI tool format
const modelWithTools = model.bind({ tools: tools.map(formatToOpenAITool) });

const prompt = ChatPromptTemplate.fromMessages([
  ["ai", "You are a helpful assistant"],
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);

const runnableAgent = RunnableSequence.from([
  {
    input: (i: { input: string; steps: ToolsAgentStep[] }) => i.input,
    agent_scratchpad: (i: { input: string; steps: ToolsAgentStep[] }) =>
      formatToOpenAIToolMessages(i.steps),
  },
  prompt,
  modelWithTools,
  new OpenAIToolsAgentOutputParser(),
]).withConfig({ runName: "OpenAIToolsAgent" });

const executor = AgentExecutor.fromAgentAndTools({
  agent: runnableAgent,
  tools,
});

const res = await executor.invoke({
  input:
    "What is the sum of the current temperature in San Francisco, New York, and Tokyo?",
});

console.log(res);
3 changes: 3 additions & 0 deletions langchain/.gitignore
@@ -22,6 +22,9 @@ agents/toolkits/sql.d.ts
agents/format_scratchpad.cjs
agents/format_scratchpad.js
agents/format_scratchpad.d.ts
agents/format_scratchpad/openai_tools.cjs
agents/format_scratchpad/openai_tools.js
agents/format_scratchpad/openai_tools.d.ts
agents/format_scratchpad/log.cjs
agents/format_scratchpad/log.js
agents/format_scratchpad/log.d.ts
10 changes: 9 additions & 1 deletion langchain/package.json
@@ -34,6 +34,9 @@
"agents/format_scratchpad.cjs",
"agents/format_scratchpad.js",
"agents/format_scratchpad.d.ts",
"agents/format_scratchpad/openai_tools.cjs",
"agents/format_scratchpad/openai_tools.js",
"agents/format_scratchpad/openai_tools.d.ts",
"agents/format_scratchpad/log.cjs",
"agents/format_scratchpad/log.js",
"agents/format_scratchpad/log.d.ts",
@@ -1364,7 +1367,7 @@
"langchainhub": "~0.0.6",
"langsmith": "~0.0.48",
"ml-distance": "^4.0.0",
"openai": "^4.16.1",
"openai": "^4.17.0",
"openapi-types": "^12.1.3",
"p-queue": "^6.6.2",
"p-retry": "4",
@@ -1431,6 +1434,11 @@
"import": "./agents/format_scratchpad.js",
"require": "./agents/format_scratchpad.cjs"
},
"./agents/format_scratchpad/openai_tools": {
"types": "./agents/format_scratchpad/openai_tools.d.ts",
"import": "./agents/format_scratchpad/openai_tools.js",
"require": "./agents/format_scratchpad/openai_tools.cjs"
},
"./agents/format_scratchpad/log": {
"types": "./agents/format_scratchpad/log.d.ts",
"import": "./agents/format_scratchpad/log.js",
1 change: 1 addition & 0 deletions langchain/scripts/create-entrypoints.js
@@ -17,6 +17,7 @@ const entrypoints = {
"agents/toolkits/aws_sfn": "agents/toolkits/aws_sfn",
"agents/toolkits/sql": "agents/toolkits/sql/index",
"agents/format_scratchpad": "agents/format_scratchpad/openai_functions",
"agents/format_scratchpad/openai_tools": "agents/format_scratchpad/openai_tools",
"agents/format_scratchpad/log": "agents/format_scratchpad/log",
"agents/format_scratchpad/xml": "agents/format_scratchpad/xml",
"agents/format_scratchpad/log_to_message":
24 changes: 24 additions & 0 deletions langchain/src/agents/format_scratchpad/openai_tools.ts
@@ -0,0 +1,24 @@
import type { ToolsAgentStep } from "../openai/output_parser.js";
import {
  type BaseMessage,
  ToolMessage,
  AIMessage,
} from "../../schema/index.js";

export function formatToOpenAIToolMessages(
  steps: ToolsAgentStep[]
): BaseMessage[] {
  return steps.flatMap(({ action, observation }) => {
    if ("messageLog" in action && action.messageLog !== undefined) {
      // Replay the model's original message(s), then append a ToolMessage
      // carrying the observation, keyed by the originating tool call id.
      const log = action.messageLog as BaseMessage[];
      return log.concat(
        new ToolMessage({
          content: observation,
          tool_call_id: action.toolCallId,
        })
      );
    } else {
      // Fall back to recording the raw action log as an AI message.
      return [new AIMessage(action.log)];
    }
  });
}
