
Add support for using inline mode in private conversations #41

Closed · kithawk opened this issue Mar 7, 2023 · 10 comments · Fixed by #101
Labels: enhancement (New feature or request), ongoing

kithawk commented Mar 7, 2023

Currently, the bot only supports using inline mode in group chats.
I would like to request a new feature that would allow inline mode to be used in private conversations as well.
This would be useful for users who prefer to interact with the bot one-on-one instead of in a group chat.

n3d1117 (Owner) commented Mar 7, 2023

Hi @kithawk, can you elaborate on this use case? Why would you need inline mode in a private chat when you can simply message the bot privately

kithawk (Author) commented Mar 7, 2023

This would allow you to generate and send a response directly in a private conversation with another user (marked as being generated via the bot, just like when using bots such as @vid or @pic).

n3d1117 (Owner) commented Mar 7, 2023

Ah, I get what you mean now. I've added experimental support for this in the inline-query-response branch.

Let me know what you think. I've tested it a few times and it seems very slow.

kithawk (Author) commented Mar 8, 2023

Thanks, it works as expected. However, it appears that the Telegram API only delivers an inline query once we stop typing. So when we pause, ChatGPT generates a response, but if we resume typing and then send another message, the bot makes an additional API call, resulting in extra token usage, because the earlier partial query has already been processed by ChatGPT even though it was never actually sent in Telegram.

For example, if we type @bot hey, we should get a 'hello' response. However, if we pause and then resume typing @bot hey, what's the weather?, we will receive a reply about having no access to the weather. This results in a separate call to the ChatGPT API (using more tokens), and the partial query also ends up in the conversation history.

To avoid this issue, one solution could be to not keep history in private inline messages. Another solution could be to send the message to the OpenAI API only if the private inline message ends with something that would explicitly indicate the end of the query, such as @@. This way, only the message @bot hey, what's the weather @@ would be sent to ChatGPT to provide an inline answer.
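
A minimal sketch of that check (END_MARKER and the handler body are illustrative assumptions, not the bot's actual code), assuming python-telegram-bot:

from telegram import Update
from telegram.ext import ContextTypes

END_MARKER = '@@'  # hypothetical end-of-query marker

async def inline_query(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    query = update.inline_query.query
    if not query.endswith(END_MARKER):
        # user is probably still typing: skip the OpenAI call, so no tokens are used
        return
    prompt = query.removesuffix(END_MARKER).strip()
    # ...only now send `prompt` to the OpenAI API and answer the inline query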

n3d1117 (Owner) commented Mar 8, 2023

Hi @kithawk, I agree that history should not be kept for inline messages.

Does inline query work for you with long outputs (e.g. if you ask it to tell me a story)? For me it just keeps spinning and never updates. It seems like the response never shows up if the request takes more than a fixed amount of time (e.g. 5s), but I can't find any documentation on this timeout.

kithawk (Author) commented Mar 8, 2023

Hey @n3d1117, same behaviour for me - longer queries time out. Not sure if this works, but maybe we could answer the inline query with just the prompt, and once we get the response from the API, edit the message that was already sent (containing just the prompt).

n3d1117 (Owner) commented Mar 8, 2023

Hmm, not a big fan of this idea. Will keep this issue open in case someone wants to work on it.

n3d1117 added the "help wanted" (Extra attention is needed) label on Mar 8, 2023
k3it (Contributor) commented Mar 9, 2023

I wonder if the @bold bot could serve as a model for the inline mode. It accepts queries in real time, but it doesn't execute the query or provide a response until the user clicks one of the presented options. It would be more efficient to hold off on sending anything to the API until after the user confirms the choice by clicking on a pop-up.

This approach would also allow presenting different options for the type of response you want from the API (humorous, serious, etc.), similar to the "bold", "italic" and "fixed" options in the @bold bot.

n3d1117 (Owner) commented Mar 9, 2023

@k3it I don't think that's going to work. Here's how inline queries work for these kinds of bots:

from telegram import InlineQueryResultArticle, InputTextMessageContent, Update
from telegram.constants import ParseMode
from telegram.ext import ContextTypes


async def inline_query(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    query = update.inline_query.query
    results = [
        InlineQueryResultArticle(
            id='some_id',
            title="Bold",
            # the content is fixed when the query is answered, based only on the typed text
            input_message_content=InputTextMessageContent(
                f"*{query}*", parse_mode=ParseMode.MARKDOWN
            ),
        ),
        # italic, etc...
    ]
    await update.inline_query.answer(results)

There's a predefined InputTextMessageContent based on what you write. There's no way to make an API call after the user selects Bold.

Also, I don't think what @kithawk was proposing is feasible either. I haven't checked, but we don't have access to the message sent after a user clicks on the inline popup, so we can't edit it.

bugfloyd (Contributor) commented

@n3d1117 I believe we can edit inline messages sent by the bot. I've proposed a PR to implement this feature. Feel free to review and provide suggestions.
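
For reference, a minimal sketch of how the answer-then-edit flow could look with python-telegram-bot v20 (the placeholder text, the 'noop' button and get_chatgpt_reply are illustrative assumptions; the actual PR may differ). Telegram only includes inline_message_id in the chosen_inline_result update when the result carries an inline keyboard and inline feedback is enabled via @BotFather:

from telegram import (InlineKeyboardButton, InlineKeyboardMarkup,
                      InlineQueryResultArticle, InputTextMessageContent, Update)
from telegram.ext import (Application, ChosenInlineResultHandler, ContextTypes,
                          InlineQueryHandler)


async def get_chatgpt_reply(prompt: str) -> str:
    ...  # placeholder: call the OpenAI API here


async def inline_query(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    query = update.inline_query.query
    if not query:
        return
    # answer immediately with a placeholder so the inline query doesn't time out
    result = InlineQueryResultArticle(
        id='chatgpt',
        title='Ask ChatGPT',
        input_message_content=InputTextMessageContent(f'{query}\n\n(generating answer...)'),
        # an inline keyboard is required, otherwise the chosen_inline_result
        # update won't contain the inline_message_id we need for editing
        reply_markup=InlineKeyboardMarkup(
            [[InlineKeyboardButton('generating...', callback_data='noop')]]
        ),
    )
    await update.inline_query.answer([result])


async def chosen_inline_result(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # fired after the user picks the result (requires inline feedback via @BotFather)
    chosen = update.chosen_inline_result
    reply = await get_chatgpt_reply(chosen.query)
    # editing without reply_markup also removes the placeholder button
    await context.bot.edit_message_text(text=reply, inline_message_id=chosen.inline_message_id)


application = Application.builder().token('TOKEN').build()
application.add_handler(InlineQueryHandler(inline_query))
application.add_handler(ChosenInlineResultHandler(chosen_inline_result))
application.run_polling()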

n3d1117 added the "ongoing" and "enhancement" (New feature or request) labels and removed the "help wanted" label on Apr 1, 2023