Adds chat interface (#509)
jacoblee93 committed Mar 9, 2024
1 parent 5e59246 commit dc48c2e
Showing 64 changed files with 16,949 additions and 6 deletions.
Binary file added .github/img/chat_playground.png
50 changes: 50 additions & 0 deletions README.md
@@ -330,6 +330,56 @@ runnable and share a link with the configuration:
<img src="https://github.com/langchain-ai/langserve/assets/3205522/86ce9c59-f8e4-4d08-9fa3-62030e0f521d" width="50%"/>
</p>

## Chat playground

LangServe also makes a chat-focused playground available at `/my_runnable/chat_playground/`.
Unlike the general playground, it only supports certain types of runnables: the runnable's input schema must
be a `dict` with a single key, and that key's value must be a list of chat messages. The runnable
must return either an `AIMessage` or a string.

Here's an example route:

```python
from typing import List, Union

from fastapi import FastAPI
from langchain.chat_models import ChatAnthropic
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

from langserve import add_routes
from langserve.pydantic_v1 import BaseModel, Field

app = FastAPI()


# Declare a chain
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful, professional assistant named Cob."),
        MessagesPlaceholder(variable_name="messages"),
    ]
)

chain = prompt | ChatAnthropic(model="claude-2")


class InputChat(BaseModel):
    """Input for the chat endpoint."""

    messages: List[Union[HumanMessage, AIMessage, SystemMessage]] = Field(
        ...,
        description="The chat messages representing the current conversation.",
    )


add_routes(
    app,
    chain.with_types(input_type=InputChat),
    enable_feedback_endpoint=True,
    enable_public_trace_link_endpoint=True,
)
```

If you are using LangSmith, you can also set `enable_feedback_endpoint=True` on your route to enable thumbs-up/thumbs-down buttons
after each message, and `enable_public_trace_link_endpoint=True` to add a button that creates a public trace for each run.

Here's an example with the above two options turned on:

<p align="center">
<img src="./.github/img/chat_playground.png" width="50%"/>
</p>

Note: If you enable public trace links, the internals of your chain will be exposed. We recommend only using this setting
for demos or testing.

## Legacy Chains

LangServe works with both Runnables (constructed
64 changes: 64 additions & 0 deletions examples/chat_playground/server.py
@@ -0,0 +1,64 @@
#!/usr/bin/env python
"""Example of a simple chatbot that just passes current conversation
state back and forth between server and client.
"""
from typing import List, Union

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from langchain.chat_models import ChatAnthropic
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

from langserve import add_routes
from langserve.pydantic_v1 import BaseModel, Field

app = FastAPI(
    title="LangChain Server",
    version="1.0",
    description="Spin up a simple api server using Langchain's Runnable interfaces",
)


# Set all CORS enabled origins
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
    expose_headers=["*"],
)


# Declare a chain
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful, professional assistant named Cob."),
        MessagesPlaceholder(variable_name="messages"),
    ]
)

chain = prompt | ChatAnthropic(model="claude-2")


class InputChat(BaseModel):
    """Input for the chat endpoint."""

    messages: List[Union[HumanMessage, AIMessage, SystemMessage]] = Field(
        ...,
        description="The chat messages representing the current conversation.",
    )


add_routes(
    app,
    chain.with_types(input_type=InputChat),
    enable_feedback_endpoint=True,
    enable_public_trace_link_endpoint=True,
)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="localhost", port=8000)
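
Once this example server is running, the chat playground should be available at `http://localhost:8000/chat_playground/` (the chain is mounted at the root path since no `path` is passed to `add_routes`). The same route can also be invoked programmatically; below is a minimal client sketch, with the URL and message contents as assumptions rather than part of this commit:

```python
# Hypothetical client for the example server above; assumes it is running
# locally on port 8000 with the chain mounted at the root path.
from langchain_core.messages import HumanMessage

from langserve import RemoteRunnable

chat_chain = RemoteRunnable("http://localhost:8000/")

# The input must match InputChat: a dict with a single "messages" key holding
# the conversation so far.
response = chat_chain.invoke(
    {"messages": [HumanMessage(content="Hello! What is your name?")]}
)
print(response.content)
```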
49 changes: 49 additions & 0 deletions langserve/api_handler.py
@@ -35,6 +35,7 @@
from starlette.responses import JSONResponse, Response

from langserve.callbacks import AsyncEventAggregatorCallback, CallbackEventDict
from langserve.chat_playground import serve_chat_playground
from langserve.lzstring import LZString
from langserve.playground import serve_playground
from langserve.pydantic_v1 import BaseModel, Field, ValidationError, create_model
@@ -1366,6 +1367,54 @@ async def playground(
            public_trace_link_enabled,
        )

    async def chat_playground(
        self,
        file_path: str,
        request: Request,
        *,
        config_hash: str = "",
        server_config: Optional[RunnableConfig] = None,
    ) -> Any:
        """Return the chat playground of the runnable."""
        with _with_validation_error_translation():
            user_provided_config = await _unpack_request_config(
                config_hash,
                config_keys=self._config_keys,
                model=self._ConfigPayload,
                request=request,
                # Do not use per request config modifier for output schema
                # since it's unclear why it would make sense to modify
                # this using a per request config modifier.
                # If this is needed for some reason, please file an issue
                # explaining the use case.
                per_req_config_modifier=None,
                server_config=server_config,
            )

        config = _update_config_with_defaults(
            self._run_name, user_provided_config, request
        )

        chat_playground_url = (
            request.scope.get("root_path", "").rstrip("/")
            + self._base_url
            + "/chat_playground"
        )
        feedback_enabled = tracing_is_enabled() and self._enable_feedback_endpoint
        public_trace_link_enabled = (
            tracing_is_enabled() and self._enable_public_trace_link_endpoint
        )

        return await serve_chat_playground(
            self._runnable.with_config(config),
            self._runnable.with_config(config).input_schema,
            self._config_keys,
            chat_playground_url,
            file_path,
            feedback_enabled,
            public_trace_link_enabled,
        )

    async def create_feedback(
        self, feedback_create_req: FeedbackCreateRequest
    ) -> Feedback:
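The new `chat_playground` method is a per-request handler; the FastAPI route registration itself happens inside `add_routes` and is not part of this hunk. Purely as an illustration of where the handler sits, here is a hedged sketch of mounting it by hand — the route path, the way the handler object is obtained, and the variable names are assumptions, not code from this commit:

```python
# Illustrative only: langserve's add_routes performs the real registration.
from fastapi import FastAPI, Request

app = FastAPI()
api_handler = ...  # an APIHandler configured for the runnable (see add_routes)


@app.get("/my_runnable/chat_playground/{file_path:path}")
async def chat_playground_route(file_path: str, request: Request):
    # Delegates to the method added above, which resolves the requested asset
    # from the bundled frontend and injects the runnable's config/input schema.
    return await api_handler.chat_playground(file_path, request)
```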
98 changes: 98 additions & 0 deletions langserve/chat_playground.py
@@ -0,0 +1,98 @@
import json
import mimetypes
import os
from string import Template
from typing import Sequence, Type

from fastapi.responses import Response
from langchain.schema.runnable import Runnable

from langserve.pydantic_v1 import BaseModel


class ChatPlaygroundTemplate(Template):
    delimiter = "____"


def _get_mimetype(path: str) -> str:
    """Get mimetype for file.

    Custom implementation of mimetypes.guess_type that
    uses the file extension to determine the mimetype for some files.

    This is necessary due to: https://bugs.python.org/issue43975
    Resolves issue: https://github.com/langchain-ai/langserve/issues/245

    Args:
        path (str): Path to file

    Returns:
        str: Mimetype of file
    """
    try:
        file_extension = path.lower().split(".")[-1]
    except IndexError:
        return mimetypes.guess_type(path)[0]

    if file_extension == "js":
        return "application/javascript"
    elif file_extension == "css":
        return "text/css"
    elif file_extension in ["htm", "html"]:
        return "text/html"

    # If the file extension is not one of the specified ones,
    # use the default guess method
    mime_type = mimetypes.guess_type(path)[0]
    return mime_type


async def serve_chat_playground(
    runnable: Runnable,
    input_schema: Type[BaseModel],
    config_keys: Sequence[str],
    base_url: str,
    file_path: str,
    feedback_enabled: bool,
    public_trace_link_enabled: bool,
) -> Response:
    """Serve the chat playground."""
    local_file_path = os.path.abspath(
        os.path.join(
            os.path.dirname(__file__),
            "./chat_playground/dist",
            file_path or "index.html",
        )
    )

    base_dir = os.path.abspath(
        os.path.join(os.path.dirname(__file__), "./chat_playground/dist")
    )

    if base_dir != os.path.commonpath((base_dir, local_file_path)):
        return Response("Not Found", status_code=404)
    try:
        with open(local_file_path, encoding="utf-8") as f:
            mime_type = _get_mimetype(local_file_path)
            if mime_type in ("text/html", "text/css", "application/javascript"):
                response = ChatPlaygroundTemplate(f.read()).substitute(
                    LANGSERVE_BASE_URL=base_url[1:]
                    if base_url.startswith("/")
                    else base_url,
                    LANGSERVE_CONFIG_SCHEMA=json.dumps(
                        runnable.config_schema(include=config_keys).schema()
                    ),
                    LANGSERVE_INPUT_SCHEMA=json.dumps(input_schema.schema()),
                    LANGSERVE_FEEDBACK_ENABLED=json.dumps(
                        "true" if feedback_enabled else "false"
                    ),
                    LANGSERVE_PUBLIC_TRACE_LINK_ENABLED=json.dumps(
                        "true" if public_trace_link_enabled else "false"
                    ),
                )
            else:
                response = f.buffer.read()
    except FileNotFoundError:
        return Response("Not Found", status_code=404)

    return Response(response, media_type=mime_type)
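
A note on the `____` delimiter: `string.Template` normally uses `$`, which would collide with the `$` characters scattered through the bundled JavaScript and CSS, so the frontend assets presumably carry placeholders of the form `____LANGSERVE_BASE_URL` instead. Below is a minimal sketch of the substitution, using a made-up HTML fragment rather than the real bundled asset:

```python
from string import Template


class ChatPlaygroundTemplate(Template):
    # Same custom delimiter as in langserve/chat_playground.py above.
    delimiter = "____"


# Hypothetical fragment standing in for the bundled index.html; the real
# placeholder names match the keyword arguments passed to .substitute() above.
html = '<script>const baseUrl = "____LANGSERVE_BASE_URL";</script>'

print(ChatPlaygroundTemplate(html).substitute(LANGSERVE_BASE_URL="my_runnable"))
# -> <script>const baseUrl = "my_runnable";</script>
```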
18 changes: 18 additions & 0 deletions langserve/chat_playground/.eslintrc.cjs
@@ -0,0 +1,18 @@
module.exports = {
  root: true,
  env: { browser: true, es2020: true },
  extends: [
    'eslint:recommended',
    'plugin:@typescript-eslint/recommended',
    'plugin:react-hooks/recommended',
  ],
  ignorePatterns: ['dist', '.eslintrc.cjs'],
  parser: '@typescript-eslint/parser',
  plugins: ['react-refresh'],
  rules: {
    'react-refresh/only-export-components': [
      'warn',
      { allowConstantExport: true },
    ],
  },
}
28 changes: 28 additions & 0 deletions langserve/chat_playground/.gitignore
@@ -0,0 +1,28 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*

node_modules
dist
dist-ssr
*.local

# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?

.yarn

!dist
27 changes: 27 additions & 0 deletions langserve/chat_playground/README.md
@@ -0,0 +1,27 @@
# React + TypeScript + Vite

This template provides a minimal setup to get React working in Vite with HMR and some ESLint rules.

Currently, two official plugins are available:

- [@vitejs/plugin-react](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react/README.md) uses [Babel](https://babeljs.io/) for Fast Refresh
- [@vitejs/plugin-react-swc](https://github.com/vitejs/vite-plugin-react-swc) uses [SWC](https://swc.rs/) for Fast Refresh

## Expanding the ESLint configuration

If you are developing a production application, we recommend updating the configuration to enable type-aware lint rules:

- Configure the top-level `parserOptions` property like this:

```js
parserOptions: {
  ecmaVersion: 'latest',
  sourceType: 'module',
  project: ['./tsconfig.json', './tsconfig.node.json'],
  tsconfigRootDir: __dirname,
},
```

- Replace `plugin:@typescript-eslint/recommended` with `plugin:@typescript-eslint/recommended-type-checked` or `plugin:@typescript-eslint/strict-type-checked`
- Optionally add `plugin:@typescript-eslint/stylistic-type-checked`
- Install [eslint-plugin-react](https://github.com/jsx-eslint/eslint-plugin-react) and add `plugin:react/recommended` & `plugin:react/jsx-runtime` to the `extends` list
1 change: 1 addition & 0 deletions langserve/chat_playground/dist/assets/index-a69b1f28.css

Large diffs are not rendered by default.

