Add Streaming Support for Inline-Query Callback Responses #235
Conversation
text=f'{query}\n\n_{answer_tr}:_\n{content}',
                         is_inline=True)
except:
    continue
Why not log?
@jvican good question. I used the logic from the main prompt function as the base for the inline query streaming. I think we can add logging to both, and also try to consolidate the semi-duplicated code.
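A hedged sketch of what the suggested logging could look like in place of the bare `except: continue`. The helper name `edit_with_logging`, the `edit_message` callable, and the boolean return convention are illustrative placeholders, not the actual bot API:

```python
import logging

logger = logging.getLogger("telegram_bot")


def edit_with_logging(edit_message, text):
    """Attempt a message edit; on failure, log and move on instead of
    silently swallowing the error.

    `edit_message` stands in for the real Telegram edit call; returning
    False signals the caller to retry on the next streamed chunk.
    """
    try:
        edit_message(text)
        return True
    except Exception as exc:
        logger.warning(
            "Failed to edit inline message, will retry on next chunk: %s", exc
        )
        return False
```

This keeps the "retry on the next chunk" behavior discussed in the thread while making failures visible in the logs.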
chatgpt-telegram-bot/bot/telegram_bot.py
Line 459 in 756a3fe
continue
+1 for future logic isolation! Yes, we can add a log here, no problem. I didn't do that initially because these operations (delete + send initial message) failed very rarely in my testing, and even when they did, they were simply retried when the next chunk came in.
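The "retried when the next chunk comes in" behavior can be sketched roughly as follows. This is a simplified, synchronous illustration under assumed names (`stream_to_inline`, `send_initial` are hypothetical, not the PR's actual functions):

```python
def stream_to_inline(chunks, send_initial):
    """Accumulate streamed chunks; if sending the initial message fails,
    just try again when the next chunk arrives.

    `send_initial` is a placeholder for the delete + send-initial-message
    operation discussed in the review thread.
    """
    sent = False
    text = ""
    for chunk in chunks:
        text += chunk
        if not sent:
            try:
                send_initial(text)
                sent = True
            except Exception:
                # A rare transient failure: skip for now, the same
                # operation is attempted again on the next chunk.
                continue
    return sent
```

The design trade-off is that a transient failure costs at most one chunk's worth of delay, which is why the original code tolerated it silently.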
backoff = 0
async for content, tokens in stream_response:
    if len(content.strip()) == 0:
        continue
Why not log?
Same here: it's the same situation as in the other comment.
No need to log anything here. Sometimes the OpenAI stream response is just whitespace at the beginning; I simply ignore it and wait for the next chunk.
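A minimal, self-contained sketch of this whitespace-skipping pattern. The simulated stream and the `collect_non_empty` helper are illustrative (the PR's real stream yields `(content, tokens)` pairs from the OpenAI API, not bare strings):

```python
import asyncio


async def stream_response():
    # Simulated OpenAI-style stream: the first chunks may be pure whitespace.
    for chunk in ["", "  \n", "Hello", " world"]:
        yield chunk


async def collect_non_empty(stream):
    """Accumulate streamed content, skipping whitespace-only chunks."""
    parts = []
    async for content in stream:
        if len(content.strip()) == 0:
            continue  # ignore leading/interstitial whitespace chunks
        parts.append(content)
    return parts


result = asyncio.run(collect_non_empty(stream_response()))
```

Since whitespace-only chunks are an expected part of normal streaming, treating them as a skip condition rather than an error (and so not logging them) seems reasonable.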
Thank you @bugfloyd! Anyway, some notes on the PR:
I've pushed these minor fixes to your PR branch (hope you're ok with it). Looks good to me, let me know what you think.
@n3d1117 Thanks for the fixes. Based on the current changes, I believe this PR is good to go for now. I also have some suggestions to improve the code and reduce the size of this class; if you don't get to it first, I plan to propose a PR with those changes next week.
That would be awesome @bugfloyd!
This PR introduces streaming support for inline query responses when the call to action button is used. While reviewing the code, please consider the following notes:
- The wrap_with_indicator function has been modified to accommodate the handling of inline queries.

Feel free to provide feedback and suggest any additional enhancements to the implementation.