
Creating ChatMessages is unreasonably slow #6064

Closed
EllenOrange opened this issue Dec 16, 2023 · 11 comments
Labels
type: enhancement Minor feature or improvement to an existing feature
Milestone

Comments

@EllenOrange

EllenOrange commented Dec 16, 2023

ALL software version info

WSL Ubuntu
Python 3.11.6
Panel 1.3.4
Bokeh 3.3.2

Description of expected behavior and the observed behavior

It seems as though creating a pn.chat.ChatMessage takes ~0.3 seconds per message (!). This means if I load a history with 100 messages, the load time goes up by nearly 30 seconds, which makes for an unreasonably slow development experience.

In the example code I create the objects manually and then set the .objects on the ChatFeed, but I've verified that this also occurs if I send the messages one by one.

It's very possible I'm missing something simple here, but I've followed the available instructions as closely as I can manage.

Complete, minimal, self-contained example code that reproduces the issue

import time

import panel as pn
from pydantic import BaseModel

pn.extension(template="fast")

class HistoryMessage(BaseModel):
    role: str
    content: str

number_of_messages = 100
history_messages = [HistoryMessage(role='user', content='Test string') for _ in range(number_of_messages)]

start_time = time.perf_counter()
chatmessages = [pn.chat.ChatMessage(hm.content, user=hm.role) for hm in history_messages]
end_time = time.perf_counter()

execution_time = end_time - start_time
print(f"Create {number_of_messages} messages = {execution_time} seconds")

chatfeed = pn.chat.ChatFeed()
chatfeed.objects = chatmessages

chatfeed.servable()
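For repeated measurements, the ad-hoc `perf_counter` bracketing in the script above can be wrapped in a small, Panel-independent context manager. This is a generic helper sketch, not part of Panel's API; the names `timed` and `results` are illustrative:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label, results=None):
    """Measure wall-clock time of the enclosed block with perf_counter."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        if results is not None:
            results[label] = elapsed
        print(f"{label} = {elapsed:.3f} seconds")

# Usage mirroring the repro script (a list comprehension stands in for
# the ChatMessage construction loop, so this runs without Panel):
results = {}
with timed("Create 100 messages", results):
    _ = [f"message {i}" for i in range(100)]
```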

Screenshots or screencasts of the bug in action

(test_app) elean@DESKTOP-290LANC:~/perforce/mainline/test_app$ cd /home/elean/perforce/mainline/test_app ; /usr/bin/env /home/elean/miniforge3/envs/test_app/bin/python /home/elean/.vscode-server/extensions/ms-python.python-2023.22.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher 42127 -- -m panel serve app.py --show --autoreload
Create 100 messages = 29.158785282983445 seconds
2023-12-16 07:56:31,763 Starting Bokeh server version 3.3.2 (running on Tornado 6.4)
2023-12-16 07:56:31,764 User authentication hooks NOT provided (default user enabled)
2023-12-16 07:56:31,766 Bokeh app running at: http://localhost:5006/app
2023-12-16 07:56:31,766 Starting Bokeh server with process id: 612354

For completeness, I just ran the same case outside of debugpy and got:
$ panel serve app.py --show --autoreload
Create 100 messages = 16.05742013201234 seconds
2023-12-16 08:10:38,523 Starting Bokeh server version 3.3.2 (running on Tornado 6.4)
2023-12-16 08:10:38,523 User authentication hooks NOT provided (default user enabled)
2023-12-16 08:10:38,524 Bokeh app running at: http://localhost:5006/app
2023-12-16 08:10:38,524 Starting Bokeh server with process id: 615103

Which is faster but still pretty unreasonable. :)

@philippjfr philippjfr added the type: enhancement Minor feature or improvement to an existing feature label Dec 17, 2023
@philippjfr philippjfr added this to the v1.3.5 milestone Dec 17, 2023
@philippjfr
Member

philippjfr commented Dec 17, 2023

@EllenOrange Agreed, totally unreasonable. I think I can squeeze out a 2-3x speed improvement pretty easily, and then eventually work on a singular ChatMessage Bokeh model, which should give us a 10x or more improvement.

@EllenOrange
Author

@philippjfr Sweet!

If you have time, I'm curious what the root cause(s) of this are.

For example, would using a Markdown pane and updating its string when new messages come in be a performant workaround?
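As a sketch of that workaround (assuming it is acceptable to lose per-message styling, avatars, and reactions): render the whole history into a single string and hand it to one `pn.pane.Markdown`, updating its `object` when new messages arrive. The `history_to_markdown` helper below is illustrative, not a Panel API:

```python
def history_to_markdown(messages):
    """Flatten (role, content) pairs into one Markdown string.

    Using a single Markdown pane avoids constructing one ChatMessage
    (and its underlying Bokeh models) per history entry.
    """
    return "\n\n".join(f"**{role}:** {content}" for role, content in messages)

history = [("user", "Test string")] * 100
md_text = history_to_markdown(history)

# With Panel available, one pane would then replace 100 ChatMessages:
# import panel as pn
# pane = pn.pane.Markdown(md_text)
# ...and on a new message: pane.object = history_to_markdown(history + [new])
```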

@philippjfr
Member

I wasn't quite able to reproduce your timings, locally I'm seeing something closer to about 4 seconds and have been able to get that down to ~1.3 seconds.

@EllenOrange
Author

Nice!

Also weird; my laptop is quite zippy. Maybe this touches on a way that WSL is slower than native Linux?

In any case, thanks for looking into the issue. :)

@philippjfr
Member

Applied some optimizations in param since a lot of the really hot codepaths were there: holoviz/param#893

PR to optimize ChatMessage itself to follow shortly.

@ahuang11
Contributor

> wasn't quite able to reproduce your timings

I suspect that Philipp was using main, which included #6034 and optimized ChatReactionIcons used in ChatMessage

@ahuang11
Contributor

ahuang11 commented Dec 18, 2023

Using panel==1.3.0

2023-12-17 17:08:23,218 Bokeh app running at: http://localhost:5006/test
2023-12-17 17:08:23,218 Starting Bokeh server with process id: 21078
Create 100 messages = 29.856512791011482 seconds
2023-12-17 17:08:56,723 WebSocket connection opened
2023-12-17 17:08:56,724 ServerConnection created

After pip install panel==1.3.5rc1, it's significantly faster (more than 10x). I wonder if it was hitting the broken tabler icon link and kept retrying synchronously, which slowed it down a lot?

2023-12-17 17:09:55,191 Starting Bokeh server version 3.3.0 (running on Tornado 6.2)
2023-12-17 17:09:55,191 User authentication hooks NOT provided (default user enabled)
2023-12-17 17:09:55,193 Bokeh app running at: http://localhost:5006/test
2023-12-17 17:09:55,193 Starting Bokeh server with process id: 21206
Create 100 messages = 2.299493500031531 seconds
2023-12-17 17:10:07,007 WebSocket connection opened
2023-12-17 17:10:07,010 ServerConnection created

@philippjfr
Member

Were we fetching those in Python?

@philippjfr
Member

Anyway, that's good news, we can claim a near 30x speedup over 1.3.4 :)

@philippjfr
Member

You were correct @ahuang11, it was a problem with fetching the icons. Will wrap this PR up and then release.

@philippjfr
Member

Will close; there are perhaps further optimizations to make, but for now we've gone from ~30s -> 2.3s (1.3.5rc1) -> 1.3s (on current main).
