Add shiny.ui.Chat #1453

Merged

101 commits, merged Jul 3, 2024

Changes from 67 commits
63aa066
wip shiny.chat experiments
cpsievert Apr 22, 2024
8894244
Use custom messages for streaming
cpsievert Apr 22, 2024
807b034
Clean up
cpsievert Apr 23, 2024
38ed666
Switch from input to output binding
cpsievert Apr 25, 2024
0ed3b56
Make streaming easier
cpsievert May 17, 2024
ac547e2
Improved generic interface
cpsievert May 23, 2024
d4bb411
Avoid a binding altogether
cpsievert May 23, 2024
0e57c89
Use strategy pattern to normalize messages and also a way to register…
cpsievert May 24, 2024
61f3139
Add ollama support and example
cpsievert May 29, 2024
76381d5
Sanitize HTML before putting it in a message
cpsievert May 29, 2024
c1dcf02
Require an active session when initializing Chat(), add chat_ui() for…
cpsievert May 29, 2024
63e49a7
Introduce placeholder message concept
cpsievert May 29, 2024
f6b75b5
First pass at code highlighting and copy to clipboard
cpsievert May 30, 2024
69768a9
Display user messages differently; refactor
cpsievert May 30, 2024
9f19a24
Add support for AsyncIterable in append_message_stream(); add nonbloc…
cpsievert May 31, 2024
bf09580
Make input autoresize
cpsievert May 31, 2024
04ad941
Make appending of message streams non-blocking by default
cpsievert May 31, 2024
98eb35a
Make sure message content doesn't blow out of its container
cpsievert Jun 4, 2024
1869a1c
More UI improvements
cpsievert Jun 4, 2024
3dfb275
Refactor; better code highlighting
cpsievert Jun 4, 2024
370ed19
Leverage inheritance in a more sane way
cpsievert Jun 5, 2024
e12cca0
Add langchain BaseChatModel support
cpsievert Jun 6, 2024
532b4f2
Updates for recent anthropic release; be more careful not to error ou…
cpsievert Jun 6, 2024
8abe0e3
Better error handling
cpsievert Jun 6, 2024
112f897
Various improvements; address some feedback
cpsievert Jun 7, 2024
0ae748e
Allow user to type while receiving a message (but prevent sending unt…
cpsievert Jun 7, 2024
77d723d
Use .ui() method to display; move initial messages to constructor
cpsievert Jun 10, 2024
e61e19e
Move more UI logic to client
cpsievert Jun 10, 2024
91aa45c
Add user_input_transformer; don't display system messages; separate s…
cpsievert Jun 10, 2024
2a424c9
Flesh out docstrings; few other improvements
cpsievert Jun 10, 2024
e9deba0
Simplify/improve highlight logic
cpsievert Jun 10, 2024
7d7c006
Add recipes example
cpsievert Jun 10, 2024
eaba0be
Separate concerns between user/assistant message components
cpsievert Jun 10, 2024
dcf29e8
Add assistant_response_transformer
cpsievert Jun 11, 2024
91e36d0
Move on_error back to on_user_submit
cpsievert Jun 11, 2024
2598c10
Make user input id accessible (things like shiny_validate might want it)
cpsievert Jun 11, 2024
76840b1
First pass at imposing a token limit
cpsievert Jun 11, 2024
d7ea9cb
Clean up some ui API details
cpsievert Jun 12, 2024
ec68450
Refactor/improve message types
cpsievert Jun 12, 2024
14df00b
Fix get_messages() logic; embrace FullMessage inside internal state
cpsievert Jun 12, 2024
a2c37da
wip provide a default tokenizer; remember pre&post transform response…
cpsievert Jun 13, 2024
065bdd0
Add set_user_input() method; improve styling; other refactoring/fixes
cpsievert Jun 14, 2024
343dbc8
Use subclassing to provide transforms (this way the transform also ha…
cpsievert Jun 14, 2024
65a4dbc
Revert subclass transforms; support returning a ChatMessage from tran…
cpsievert Jun 17, 2024
f12423a
Fix error handling in append_message_stream()
cpsievert Jun 18, 2024
bea3634
Show actual errors when we have proof of errors not needing sanitization
cpsievert Jun 18, 2024
458ce27
Merge branch 'main' into chat-llms
cpsievert Jun 18, 2024
9beef48
Merge branch 'main' into chat-llms
cpsievert Jun 18, 2024
4e7ef05
Tweak/refactor styles; add dark mode example
cpsievert Jun 19, 2024
b756a0a
Prevent chat effects from accumulating; clean-up multi-provider example
cpsievert Jun 19, 2024
2a2d1c6
Re-organize examples
cpsievert Jun 19, 2024
a753022
wip enqueue pending messages while streaming to ensure FIFO
cpsievert Jun 20, 2024
776293e
DRY
cpsievert Jun 20, 2024
e4b58b2
Improve transform API
cpsievert Jun 25, 2024
84d4aad
Generate typestubs
cpsievert Jun 25, 2024
1b77b4d
Debug
cpsievert Jun 25, 2024
26ffe2a
Debug
cpsievert Jun 25, 2024
4f6b22d
Fixes
cpsievert Jun 25, 2024
e6efae6
Merge branch 'main' into chat-llms
cpsievert Jun 25, 2024
18466f0
More fixes
cpsievert Jun 25, 2024
1553fb2
More fixes
cpsievert Jun 25, 2024
08f9f16
Quote more types
cpsievert Jun 25, 2024
870c47f
Try requiring latest google-generativeai
cpsievert Jun 25, 2024
78a5717
Move chat packages to dev not test
cpsievert Jun 25, 2024
4a7ba8a
Add workarounds for google-generativeai not supporting Python 3.8
cpsievert Jun 25, 2024
bb919e3
Get rid of typestubs
cpsievert Jun 25, 2024
0633240
Merge branch 'main' into chat-llms
cpsievert Jun 25, 2024
03b03c7
Add requirements for recipes app to dev
cpsievert Jun 26, 2024
865c4b7
Add some lead-in commentary to each example
cpsievert Jun 26, 2024
53f66ff
Add a couple enterprise examples
cpsievert Jun 27, 2024
a499c2a
Accumulate and flush pending messages server-side instead of client-side
cpsievert Jun 28, 2024
b4e3e36
get_user_message -> get_user_input; add transform parameter
cpsievert Jun 28, 2024
cdb2695
Make get_user_input() sync not async
cpsievert Jun 28, 2024
cf19247
Fix handling of None return values in transform_user_input
cpsievert Jun 28, 2024
4f22d1a
First pass at adding tests
cpsievert Jun 28, 2024
fbb8344
Fix typing issue
cpsievert Jun 28, 2024
f95e07b
Mock an API key
cpsievert Jun 28, 2024
ac43f59
Fix more typing issues
cpsievert Jun 29, 2024
2843e38
Require anthropic 0.28
cpsievert Jun 29, 2024
634f547
Fix check for anthropic type for older Python versions
cpsievert Jun 29, 2024
47275a4
Revert anthropic requirement
cpsievert Jun 29, 2024
5338c41
Merge branch 'main' into chat-llms
cpsievert Jun 29, 2024
db56942
More tests
cpsievert Jul 1, 2024
030b695
Doc improvements
cpsievert Jul 1, 2024
297ecfd
Recommend dotenv for managing credentials
cpsievert Jul 1, 2024
347b842
Accumulate message chunks before transform, store, and send
cpsievert Jul 1, 2024
aa9218b
Tokenize the pre-transformed response; make get_user_input() sync for…
cpsievert Jul 1, 2024
a760241
Improved TransformedMessage type/logic; pass accumulated message to t…
cpsievert Jul 2, 2024
00f7785
Leverage stored messages inside .get_user_input()
cpsievert Jul 2, 2024
bc30c87
Merge branch 'main' into chat-llms
cpsievert Jul 2, 2024
7e5b09b
.get_messages() -> .messages(); .get_user_input() -> .user_input()
cpsievert Jul 2, 2024
fc891f7
Fix typing compatibility
cpsievert Jul 2, 2024
b447e65
Fix and add more tests
cpsievert Jul 2, 2024
bab0d92
Fix type and typo
cpsievert Jul 2, 2024
4ea1784
Address feedback
cpsievert Jul 2, 2024
d8b0a7f
load_dotenv() returns True
cpsievert Jul 2, 2024
bf88638
Update tests
cpsievert Jul 2, 2024
849710b
Docstring improvements
cpsievert Jul 3, 2024
11f9dae
Merge branch 'main' into chat-llms
cpsievert Jul 3, 2024
cf904a1
Remove runtime check in .ui() for Express mode
cpsievert Jul 3, 2024
5ab0b32
Update changelog
cpsievert Jul 3, 2024
49 changes: 49 additions & 0 deletions examples/chat/RAG/recipes/app.py
@@ -0,0 +1,49 @@
# from langchain_openai import ChatOpenAI
from openai import AsyncOpenAI
from utils import recipe_prompt, scrape_page_with_url

from shiny.express import ui

ui.page_opts(
    title="Recipe Extractor Chat",
    fillable=True,
    fillable_mobile=True,
)

# Initialize the chat (with a system prompt and starting message)
chat = ui.Chat(
    id="chat",
    messages=[
        {"role": "system", "content": recipe_prompt},
        {
            "role": "assistant",
            "content": "Hello! I'm a recipe extractor. Please enter a URL to a recipe page. For example, <https://www.thechunkychef.com/epic-dry-rubbed-baked-chicken-wings/>",
        },
    ],
)

chat.ui(placeholder="Enter a recipe URL...")

llm = AsyncOpenAI()


# A function to transform user input
# Note that, if an exception occurs, the function appends an apology message and
# returns None, "short-circuiting" the conversation and asking the user to try again.
@chat.transform_user_input
async def try_scrape_page(input: str) -> str | None:
    try:
        return await scrape_page_with_url(input)
    except Exception:
        await chat.append_message(
            "I'm sorry, I couldn't extract content from that URL. Please try again. "
        )
        return None


@chat.on_user_submit
async def _():
    response = await llm.chat.completions.create(
        model="gpt-4o", messages=chat.get_messages(), temperature=0, stream=True
    )
    await chat.append_message_stream(response)
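The "short-circuit" behavior of the transform above can be sketched without any Shiny or network dependencies. This is an illustrative model of the control flow only — `fake_scrape`, `transform_user_input`, and `handle_submit` are hypothetical stand-ins, not shiny APIs:

```python
import asyncio
from typing import Optional


# Stand-in for scrape_page_with_url(); raises on bad input.
async def fake_scrape(url: str) -> str:
    if not url.startswith("http"):
        raise ValueError("not a URL")
    return f"From: {url}\n\n<page text>"


# Mirrors try_scrape_page(): return the transformed input, or None to
# short-circuit the turn (the real app also appends an apology message).
async def transform_user_input(user_input: str) -> Optional[str]:
    try:
        return await fake_scrape(user_input)
    except Exception:
        return None


# Mirrors what the framework does with the transform's return value:
# None means no LLM request is made for this turn.
async def handle_submit(user_input: str) -> str:
    transformed = await transform_user_input(user_input)
    if transformed is None:
        return "short-circuited"
    return "sent to LLM: " + transformed


print(asyncio.run(handle_submit("not a url")))  # → short-circuited
```

The key design point: because the transform runs before `on_user_submit`, a failed scrape never reaches the model, and the user simply gets another chance to enter a URL.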
106 changes: 106 additions & 0 deletions examples/chat/RAG/recipes/utils.py
@@ -0,0 +1,106 @@
import aiohttp
from bs4 import BeautifulSoup

recipe_prompt = """
You are RecipeExtractorGPT.
Your goal is to extract recipe content from text and return a JSON representation of the useful information.

The JSON should be structured like this:

```
{
  "title": "Scrambled eggs",
  "ingredients": {
    "eggs": "2",
    "butter": "1 tbsp",
    "milk": "1 tbsp",
    "salt": "1 pinch"
  },
  "directions": [
    "Beat eggs, milk, and salt together in a bowl until thoroughly combined.",
    "Heat butter in a large skillet over medium-high heat. Pour egg mixture into the hot skillet; cook and stir until eggs are set, 3 to 5 minutes."
  ],
  "servings": 2,
  "prep_time": 5,
  "cook_time": 5,
  "total_time": 10,
  "tags": [
    "breakfast",
    "eggs",
    "scrambled"
  ],
  "source": "https://recipes.com/scrambled-eggs/",
}
```

The user will provide text content from a web page.
It is not very well structured, but the recipe is in there.
Please look carefully for the useful information about the recipe.
IMPORTANT: Return the result as JSON in a Markdown code block surrounded with three backticks!
"""


async def scrape_page_with_url(url: str, max_length: int = 14000) -> str:
    """
    Given a URL, scrape the web page and return its contents. This also adds the
    URL to the beginning of the text.

    Parameters
    ----------
    url:
        The URL to scrape
    max_length:
        Max length of recipe text to process. This is to prevent the model from
        running out of tokens. 14000 bytes translates to approximately 3200 tokens.
    """
    contents = await scrape_page(url)
    # Trim the string so that the prompt and reply will fit in the token limit. It
    # would be better to trim by tokens, but that requires using the tiktoken
    # package, which can be very slow to load when running on containerized
    # servers, because it needs to download the model from the internet each time
    # the container starts.
    contents = contents[:max_length]
    return f"From: {url}\n\n" + contents
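The comment above notes that trimming by tokens would be more accurate than trimming by characters. Here is a hedged sketch of both options — the tiktoken usage is an assumption (the PR deliberately avoids the dependency), and the fallback ratio is the rough 14000 chars ≈ 3200 tokens mentioned in the docstring:

```python
def trim_by_chars(text: str, max_length: int = 14000) -> str:
    # What scrape_page_with_url() does: cheap, but only approximates a token budget.
    return text[:max_length]


def trim_by_tokens(text: str, max_tokens: int = 3200) -> str:
    # More accurate, but tiktoken can be slow to initialize on containerized
    # servers (it may download encoding data), which is why the PR avoids it.
    try:
        import tiktoken  # assumed optional dependency

        enc = tiktoken.get_encoding("cl100k_base")
        return enc.decode(enc.encode(text)[:max_tokens])
    except Exception:
        # Fall back to the rough character ratio from the docstring
        return trim_by_chars(text, int(max_tokens * 14000 / 3200))
```

Either way, trimming the scraped page (rather than the whole conversation) keeps the system prompt and reply within the model's context window.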


async def scrape_page(url: str) -> str:
    # Asynchronously send an HTTP request to the URL.
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            if response.status != 200:
                raise aiohttp.ClientError(f"An error occurred: {response.status}")
            html = await response.text()

    # Parse the HTML content using BeautifulSoup
    soup = BeautifulSoup(html, "html.parser")

    # Remove script and style elements
    for script in soup(["script", "style"]):
        script.decompose()

    # List of element IDs or class names to remove
    elements_to_remove = [
        "header",
        "footer",
        "sidebar",
        "nav",
        "menu",
        "ad",
        "advertisement",
        "cookie-banner",
        "popup",
        "social",
        "breadcrumb",
        "pagination",
        "comment",
        "comments",
    ]

    # Remove unwanted elements by ID or class name
    for element in elements_to_remove:
        for e in soup.find_all(id=element) + soup.find_all(class_=element):
            e.decompose()

    # Extract text from the remaining HTML tags
    text = " ".join(soup.stripped_strings)

    return text
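The cleanup logic in `scrape_page()` can be checked offline with a tiny hand-written page — `decompose()` removes the matched nodes in place, and `stripped_strings` collapses whatever remains into plain text (the HTML below is illustrative test data, not from the app):

```python
from bs4 import BeautifulSoup

html = """
<html><body>
  <nav id="nav">Home | About</nav>
  <script>alert("hi")</script>
  <p>2 eggs, 1 tbsp butter, 1 pinch salt</p>
  <div class="ad">Buy our cookware!</div>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")

# Same two passes as scrape_page(): drop script/style tags, then drop
# elements matched by id or class name.
for tag in soup(["script", "style"]):
    tag.decompose()
for name in ["nav", "ad"]:
    for e in soup.find_all(id=name) + soup.find_all(class_=name):
        e.decompose()

text = " ".join(soup.stripped_strings)
print(text)  # → 2 eggs, 1 tbsp butter, 1 pinch salt
```

Only the recipe paragraph survives; navigation, scripts, and ads are gone before the text ever reaches the model.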
28 changes: 28 additions & 0 deletions examples/chat/basic/anthropic/app.py
@@ -0,0 +1,28 @@
from anthropic import AsyncAnthropic

from shiny.express import ui

ui.page_opts(
    title="Hello Anthropic Claude Chat",
    fillable=True,
    fillable_mobile=True,
)

# Create and display empty chat
chat = ui.Chat(id="chat")
chat.ui()

# Create the LLM client (assumes ANTHROPIC_API_KEY is set in the environment)
client = AsyncAnthropic()


# On user submit, generate and append a response
@chat.on_user_submit
async def _():
    response = await client.messages.create(
        model="claude-3-opus-20240229",
        messages=chat.get_messages(),
        stream=True,
        max_tokens=1000,
    )
    await chat.append_message_stream(response)
37 changes: 37 additions & 0 deletions examples/chat/basic/gemini/app.py
@@ -0,0 +1,37 @@
from google.generativeai import GenerativeModel

from shiny.express import ui

ui.page_opts(
    title="Hello Google Gemini Chat",
    fillable=True,
    fillable_mobile=True,
)

# Create and display empty chat
chat = ui.Chat(id="chat")
chat.ui()

# Create an LLM client
client = GenerativeModel()


# On user submit, generate and append a response
@chat.on_user_submit
async def _():
    messages = chat.get_messages()

    # Convert messages to the format expected by Google's API
    contents = [
        {
            "role": "model" if x["role"] == "assistant" else x["role"],
            "parts": x["content"],
        }
        for x in messages
    ]

    response = client.generate_content(
        contents=contents,
        stream=True,
    )
    await chat.append_message_stream(response)
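The role conversion in the Gemini example is worth isolating: Google's API names the assistant role `"model"`, while `"user"` passes through unchanged. A pure-Python sketch of that mapping, with no Google client required (`to_gemini_contents` is an illustrative name, not part of any API):

```python
def to_gemini_contents(messages):
    # "assistant" becomes "model"; any other role (e.g. "user") is kept as-is.
    return [
        {
            "role": "model" if m["role"] == "assistant" else m["role"],
            "parts": m["content"],
        }
        for m in messages
    ]


msgs = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
]
contents = to_gemini_contents(msgs)
print(contents[1])  # → {'role': 'model', 'parts': 'Hello!'}
```

This kind of per-provider adapter is exactly what the other examples avoid needing, since OpenAI, Anthropic, and Ollama all accept the `assistant`/`user` convention directly.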
36 changes: 36 additions & 0 deletions examples/chat/basic/langchain/app.py
@@ -0,0 +1,36 @@
from langchain_openai import ChatOpenAI

from shiny.express import ui

ui.page_opts(
    title="Hello LangChain Chat Models",
    fillable=True,
    fillable_mobile=True,
)

# Create and display an empty chat UI
chat = ui.Chat(id="chat")
chat.ui()

# Create the chat model
llm = ChatOpenAI()

# --------------------------------------------------------------------
# To use a different model, replace the line above with any model that
# subclasses langchain's BaseChatModel. For example, to use Anthropic:
#   from langchain_anthropic import ChatAnthropic
#   llm = ChatAnthropic(model="claude-3-sonnet-20240229")
# For more information, see the langchain documentation:
# https://python.langchain.com/v0.1/docs/modules/model_io/chat/quick_start/
# --------------------------------------------------------------------


# Define a callback to run when the user submits a message
@chat.on_user_submit
async def _():
    # Get all the messages currently in the chat
    messages = chat.get_messages()
    # Create an async generator from the messages
    stream = llm.astream(messages)
    # Append the response stream into the chat
    await chat.append_message_stream(stream)
24 changes: 24 additions & 0 deletions examples/chat/basic/ollama/app.py
@@ -0,0 +1,24 @@
import ollama

from shiny.express import ui

ui.page_opts(
    title="Hello Ollama Chat",
    fillable=True,
    fillable_mobile=True,
)

# Create and display empty chat
chat = ui.Chat(id="chat")
chat.ui()


# On user submit, generate and append a response
@chat.on_user_submit
async def _():
    response = ollama.chat(
        model="llama3",
        messages=chat.get_messages(),
        stream=True,
    )
    await chat.append_message_stream(response)
36 changes: 36 additions & 0 deletions examples/chat/basic/openai/app.py
@@ -0,0 +1,36 @@
# pyright: basic
from openai import AsyncOpenAI

from shiny.express import ui

ui.page_opts(
    title="Hello OpenAI Chat",
    fillable=True,
    fillable_mobile=True,
)

# Create a chat instance, with an initial message
chat = ui.Chat(
    id="chat",
    messages=[
        {"content": "Hello! How can I help you today?", "role": "assistant"},
    ],
    # assistant_response_transformer=lambda x: HTML(f"<h1>{x}</h1>"),
)

# Display the chat
chat.ui()

# Create the LLM client (assumes OPENAI_API_KEY is set in the environment)
client = AsyncOpenAI()


# On user submit, generate and append a response
@chat.on_user_submit
async def _():
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=chat.get_messages(),
        stream=True,
    )
    await chat.append_message_stream(response)