update langchain code and notebook (#3201)
quchuyuan authored May 16, 2024
1 parent dd4238e commit 8742943
Showing 24 changed files with 6,866 additions and 357 deletions.
292 changes: 54 additions & 238 deletions sdk/python/endpoints/online/llm/langchain/1_langchain_basic_deploy.ipynb

Large diffs are not rendered by default.

7 changes: 5 additions & 2 deletions sdk/python/endpoints/online/llm/langchain/requirements.txt
@@ -6,9 +6,12 @@ azure-keyvault-secrets
 azure-mgmt-keyvault
 azure-keyvault
 azure-search-documents==11.4.0b3
-langchain==0.0.164
+langchain
+langchain-cli
 openai==0.27.6
 parse
 requests
 pyyaml
-azure-cli
+azure-cli
+langserve
+fastapi
1 change: 1 addition & 0 deletions sdk/python/endpoints/online/llm/src/langchain/.gitignore
@@ -0,0 +1 @@
__pycache__
21 changes: 21 additions & 0 deletions sdk/python/endpoints/online/llm/src/langchain/Dockerfile
@@ -0,0 +1,21 @@
FROM python:3.11-slim

# Poetry manages the app's dependencies inside the image.
RUN pip install poetry==1.6.1

# Install straight into the system environment; no virtualenv inside the container.
RUN poetry config virtualenvs.create false

WORKDIR /code

COPY ./pyproject.toml ./README.md ./poetry.lock* ./

COPY ./package[s] ./packages

# First pass installs only the dependencies; --no-root skips the app itself.
RUN poetry install --no-interaction --no-ansi --no-root

COPY ./app ./app

# Second pass installs the app package now that its source is present.
RUN poetry install --no-interaction --no-ansi

EXPOSE 8080

CMD exec uvicorn app.server:app --host 0.0.0.0 --port 8080
83 changes: 83 additions & 0 deletions sdk/python/endpoints/online/llm/src/langchain/README.md
@@ -0,0 +1,83 @@
# demo
This sample is based on https://github.com/langchain-ai/langchain/blob/master/templates/openai-functions-agent. The modifications we have made are:
* Switch to **Prompty**
* Use **AzureChatOpenAI** instead of **ChatOpenAI**
* Change the tool to Elasticsearch

## Installation

Install the LangChain CLI if you haven't already:

```bash
pip install -U langchain-cli
```

## Adding packages

```bash
# adding packages from
# https://github.com/langchain-ai/langchain/tree/master/templates
langchain app add $PROJECT_NAME

# adding custom GitHub repo packages
langchain app add --repo $OWNER/$REPO
# or with whole git string (supports other git providers):
# langchain app add git+https://github.com/hwchase17/chain-of-verification

# with a custom api mount point (defaults to `/{package_name}`)
langchain app add $PROJECT_NAME --api_path=/my/custom/path/rag
```

Note: packages are removed by their API path

```bash
langchain app remove my/custom/path/rag
```

## Set up LangSmith (Optional)
LangSmith helps you trace, monitor, and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.


```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```

## Launch LangServe

```bash
langchain serve
```

## Running in Docker

This project folder includes a Dockerfile that allows you to easily build and host your LangServe app.

### Building the Image

To build the image, run:

```shell
docker build . -t my-langserve-app
```

If you tag your image with something other than `my-langserve-app`,
note it for use in the next step.

### Running the Image Locally

To run the image, you'll need to include any environment variables
necessary for your application.

In the example below, we inject the `OPENAI_API_KEY` environment
variable with the value set in your local environment
(`$OPENAI_API_KEY`).

We also expose port 8080 with the `-p 8080:8080` option.

```shell
docker run -e OPENAI_API_KEY=$OPENAI_API_KEY -p 8080:8080 my-langserve-app
```
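
Note that this sample's agent authenticates to Azure OpenAI rather than OpenAI, so you would pass the Azure settings instead. A hedged sketch, assuming `AZURE_OPENAI_ENDPOINT` and `OPENAI_API_VERSION` match your deployment and that the container can obtain Entra ID credentials (only `AZURE_OPENAI_DEPLOYMENT` is read explicitly by the agent code below):

```shell
docker run \
  -e AZURE_OPENAI_DEPLOYMENT=$AZURE_OPENAI_DEPLOYMENT \
  -e AZURE_OPENAI_ENDPOINT=$AZURE_OPENAI_ENDPOINT \
  -e OPENAI_API_VERSION=$OPENAI_API_VERSION \
  -p 8080:8080 my-langserve-app
```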
Empty file.
24 changes: 24 additions & 0 deletions sdk/python/endpoints/online/llm/src/langchain/app/server.py
@@ -0,0 +1,24 @@
from fastapi import FastAPI
from fastapi.responses import RedirectResponse
from langserve import add_routes

app = FastAPI()

from dotenv import load_dotenv

# Load environment variables before importing the agent below, which reads
# AZURE_OPENAI_DEPLOYMENT at import time.
load_dotenv(".env")


@app.get("/")
async def redirect_root_to_docs():
    return RedirectResponse("/docs")


# Imported after load_dotenv so the agent sees the loaded environment.
from openai_functions_agent import agent_executor as openai_functions_agent_chain

add_routes(app, openai_functions_agent_chain, path="/openai-functions-agent")

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)
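
Once the server is running, the route registered by `add_routes` can be exercised over plain HTTP. A minimal sketch, assuming the server is listening on localhost:8000 (as in the `__main__` block above) and using an illustrative question:

```python
import requests

# LangServe exposes an /invoke endpoint for each registered route; the
# payload wraps the chain's input under an "input" key.
resp = requests.post(
    "http://localhost:8000/openai-functions-agent/invoke",
    json={"input": {"input": "What is LangServe?", "chat_history": []}},
)
print(resp.json())
```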
Empty file.
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2023 LangChain, Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@@ -0,0 +1,72 @@

# openai-functions-agent

This template creates an agent that uses OpenAI function calling to communicate its decisions on what actions to take.

This example creates an agent that can optionally look up information on the internet using Tavily's search engine.

## Environment Setup

The following environment variables need to be set:

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

Set the `TAVILY_API_KEY` environment variable to access Tavily.
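
In a local shell, that could look like:

```shell
export OPENAI_API_KEY=<your-openai-key>
export TAVILY_API_KEY=<your-tavily-key>
```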

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U langchain-cli
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package openai-functions-agent
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add openai-functions-agent
```

And add the following code to your `server.py` file:
```python
from langserve import add_routes
from openai_functions_agent import agent_executor as openai_functions_agent_chain

add_routes(app, openai_functions_agent_chain, path="/openai-functions-agent")
```

(Optional) Let's now configure LangSmith.
LangSmith helps you trace, monitor, and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```

If you are inside this directory, you can spin up a LangServe instance directly by running:

```shell
langchain serve
```

This starts the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000).

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs).
We can access the playground at [http://127.0.0.1:8000/openai-functions-agent/playground](http://127.0.0.1:8000/openai-functions-agent/playground).

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/openai-functions-agent")
```
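
The remote chain then behaves like a local runnable; a minimal sketch of an invocation matching the agent's `AgentInput` schema (the question is illustrative):

```python
response = runnable.invoke({"input": "What is LangServe?", "chat_history": []})
print(response)
```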
@@ -0,0 +1,5 @@
from openai_functions_agent.agent import agent_executor

if __name__ == "__main__":
    question = "who won the women's world cup in 2023?"
    print(agent_executor.invoke({"input": question, "chat_history": []}))  # noqa: T201
@@ -0,0 +1,3 @@
from openai_functions_agent.agent import agent_executor

__all__ = ["agent_executor"]
@@ -0,0 +1,64 @@
from typing import List, Tuple

from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import AzureChatOpenAI
import os
from langchain_prompty import create_chat_prompt
from azure.identity import DefaultAzureCredential, get_bearer_token_provider


# Define the arguments schema model
class SearchQueryArgs(BaseModel):
    query: str = Field(..., example="What is the current state of the stock market?")


# Authenticate to Azure OpenAI with Microsoft Entra ID rather than an API key.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

# The deployment name is read from the AZURE_OPENAI_DEPLOYMENT environment variable.
llm = AzureChatOpenAI(
    azure_deployment=os.getenv("AZURE_OPENAI_DEPLOYMENT"),
    azure_ad_token_provider=token_provider,
)

# Load the chat prompt from the Prompty file that ships alongside this module.
prompt = create_chat_prompt(
    os.path.join(os.path.dirname(os.path.abspath(__file__)), "basic_chat.prompty")
)


def _format_chat_history(chat_history: List[Tuple[str, str]]):
    # Convert (human, ai) tuples into the message objects the prompt expects.
    buffer = []
    for human, ai in chat_history:
        buffer.append(HumanMessage(content=human))
        buffer.append(AIMessage(content=ai))
    return buffer


# LCEL pipeline: map the request into prompt variables, call the model,
# and parse any function-call decision out of its response.
agent = (
    {
        "input": lambda x: x["input"],
        "chat_history": lambda x: _format_chat_history(x["chat_history"]),
        "agent_scratchpad": lambda x: format_to_openai_function_messages(
            x["intermediate_steps"]
        ),
    }
    | prompt
    | llm
    | OpenAIFunctionsAgentOutputParser()
)


class AgentInput(BaseModel):
    input: str
    chat_history: List[Tuple[str, str]] = Field(
        ..., extra={"widget": {"type": "chat", "input": "input", "output": "output"}}
    )


agent_executor = AgentExecutor(agent=agent, verbose=True, tools=[]).with_types(
    input_type=AgentInput
)
@@ -0,0 +1,35 @@
---
name: Basic Prompt
template:
  type: mustache
  parser: prompty
description: A basic prompt that uses the GPT-3.5 chat API to answer questions
authors:
  - author_1
  - author_2
model:
  api: chat
  configuration:
    azure_deployment: gpt-35-turbo
sample:
  firstName: Jane
  lastName: Doe
  input: What is the meaning of life?
  chat_history: []
---
system:
You are an AI assistant who helps people find information.
As the assistant, you answer questions briefly, succinctly,
and in a personable manner using markdown and even add some personal flair with appropriate emojis.

# Customer
You are helping {{firstName}} {{lastName}} to find answers to their questions.
Use their name to address them in your responses.

{{#chat_history}}
{{type}}:
{{content}}
{{/chat_history}}

user:
{{input}}
