
Bad Request: message text is empty #14

Open
samoonm opened this issue Mar 22, 2023 · 31 comments
Labels: question (Further information is requested)

Comments

samoonm commented Mar 22, 2023

app_1 | 2023-03-22T01:27:41.154Z ERROR telegpt_core::modules::chat > Failed to handle chat message: A Telegram's error: Bad Request: message text is empty

unixzii (Member) commented Mar 22, 2023

Which deployment method did you use? And could you paste your configuration (remember to hide the API keys) here?

unixzii added the question label Mar 22, 2023
@Evergreentreejxc

I have the same issue; here is my config:

{
    "openaiAPIKey": "sk-AXA",
    "botToken": "6181801",
    "conversationLimit": 500,
    "databasePath": "/telegpt/data/telegpt.sqlite",
    "adminUsernames": [
        "cyandev",
        "withExtendedLifetime"
    ],
    "i18n": {
        "resetPrompt": "Your conversation has been reset."
    }
}

and my docker compose file below:

version: "3"

services:
  app:
    image: ghcr.io/icystudio/telegpt:master
    volumes:
      - ./config.json:/telegpt/config.json
      - ./data:/telegpt/data

If it's necessary, logs:

app_1  |  2023-03-22T14:04:43.346Z INFO  telegpt_core::app > Initializing bot...
app_1  |  2023-03-22T14:04:45.067Z INFO  telegpt_core::app > Bot is started!
app_1  |  2023-03-22T14:04:54.017Z ERROR telegpt_core::modules::stats::stats_mgr > Failed to query usage: Invalid column type Null at index: 0, name: SUM(tokens)
app_1  |  2023-03-22T14:04:54.017Z ERROR telegpt_core::modules::stats::stats_mgr > Failed to query usage: Invalid column type Null at index: 0, name: SUM(tokens)
app_1  |  2023-03-22T14:12:22.696Z ERROR telegpt_core::modules::chat             > Failed to handle chat message: A Telegram's error: Bad Request: message text is empty

@Voldeemort

Same problem

unixzii (Member) commented Mar 23, 2023

@Evergreentreejxc Do you face the issue every time you send a message to the bot?

Evergreentreejxc commented Mar 23, 2023

@unixzii yes

unixzii (Member) commented Mar 23, 2023

Could you try calling OpenAI API directly to see if you can get the correct responses?
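
For example, a minimal standalone check using the same async-openai crate the bot depends on (a sketch; the API key and message are placeholders):

use async_openai::types::{ChatCompletionRequestMessageArgs, CreateChatCompletionRequestArgs, Role};
use async_openai::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder key; use your real one.
    let client = Client::new().with_api_key("sk-...");
    let req = CreateChatCompletionRequestArgs::default()
        .model("gpt-3.5-turbo")
        .messages(vec![ChatCompletionRequestMessageArgs::default()
            .role(Role::User)
            .content("hi")
            .build()?])
        .build()?;
    let resp = client.chat().create(req).await?;
    // Print the first choice so we can see whether its content is empty.
    println!("{:#?}", resp.choices.first());
    Ok(())
}

If this also comes back with empty content, the problem is on the OpenAI side rather than in the bot.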

@Voldeemort

> Could you try calling OpenAI API directly to see if you can get the correct responses?

I also encountered this problem, and the API works normally in other programs.

@Evergreentreejxc

> Could you try calling OpenAI API directly to see if you can get the correct responses?
>
> I also encountered this problem, and the API works normally in other programs.

Same for me.

@Voldeemort

> Could you try calling OpenAI API directly to see if you can get the correct responses?
>
> I also encountered this problem, and the API works normally in other programs.
>
> Same for me.

🙈 I just compiled a version myself and ran it directly, but I encountered the same issue.

unixzii (Member) commented Mar 24, 2023

@Voldeemort Did you see the "Thinking..." prompt in reply after sending a message to the bot, or did the bot literally not respond at all?

@Evergreentreejxc

> @Voldeemort Did you see the "Thinking..." prompt in reply after sending a message to the bot, or did the bot literally not respond at all?

I saw the "Thinking..." content.

@Voldeemort

> @Voldeemort Did you see the "Thinking..." prompt in reply after sending a message to the bot, or did the bot literally not respond at all?

Yes, I saw the "Thinking..." prompt.

unixzii (Member) commented Mar 24, 2023

It seems that Telegram APIs are working, but something is going wrong with OpenAI responses (or parsing of them). Would you mind trying the latest prebuilt binaries?

@Voldeemort

> It seems that Telegram APIs are working, but something is going wrong with OpenAI responses (or parsing of them). Would you mind trying the latest prebuilt binaries?

Is it the 0.1.1 version built 21 hours ago?

unixzii (Member) commented Mar 24, 2023

> It seems that Telegram APIs are working, but something is going wrong with OpenAI responses (or parsing of them). Would you mind trying the latest prebuilt binaries?
>
> Is it the 0.1.1 version built 21 hours ago?

Yes, technically it should have nothing to do with this issue, but it's still worth a try.

@Voldeemort

> It seems that Telegram APIs are working, but something is going wrong with OpenAI responses (or parsing of them). Would you mind trying the latest prebuilt binaries?
>
> Is it the 0.1.1 version built 21 hours ago?
>
> Yes, technically it should have nothing to do with this issue, but it's still worth a try.

I still got this error when I used the 0.1.1 version you built on my Mac:

2023-03-24T12:50:18.944Z INFO telegpt_core::app > Initializing bot...
2023-03-24T12:50:21.023Z INFO telegpt_core::app > Bot is started!
2023-03-24T12:50:25.426Z ERROR telegpt_core::modules::chat > Failed to handle chat message: A Telegram's error: Bad Request: message text is empty
2023-03-24T12:50:38.371Z ERROR telegpt_core::modules::chat > Failed to handle chat message: A Telegram's error: Bad Request: message text is empty
2023-03-24T12:50:45.342Z ERROR telegpt_core::modules::chat > Failed to handle chat message: A Telegram's error: Bad Request: message text is empty
2023-03-24T12:50:47.538Z ERROR telegpt_core::modules::chat > Failed to handle chat message: A Telegram's error: Bad Request: message text is empty

unixzii (Member) commented Mar 24, 2023

I've got no ideas on this. 😅

But I still think it's probably a network problem; I need some network packet captures for further diagnostics.

unixzii (Member) commented Mar 24, 2023

You can add the RUST_LOG=TRACE environment variable to get more verbose logs.
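
For the docker compose deployment shown above, that could look like this (a sketch):

version: "3"

services:
  app:
    image: ghcr.io/icystudio/telegpt:master
    environment:
      - RUST_LOG=TRACE
    volumes:
      - ./config.json:/telegpt/config.json
      - ./data:/telegpt/data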

@Voldeemort

> You can add the RUST_LOG=TRACE environment variable to get more verbose logs.

Maybe I can give you my server username and password. I get the same problem running the program on both Debian and macOS, as did the other person... hmmm...

unixzii (Member) commented Mar 24, 2023

> You can add the RUST_LOG=TRACE environment variable to get more verbose logs.
>
> Maybe I can give you my server username and password. I get the same problem running the program on both Debian and macOS, as did the other person... hmmm...

I'd suggest not sharing your server credentials with others; the necessary logs are enough for our discussion.

@Voldeemort

> You can add the RUST_LOG=TRACE environment variable to get more verbose logs.
>
> Maybe I can give you my server username and password. I get the same problem running the program on both Debian and macOS, as did the other person... hmmm...
>
> I'd suggest not sharing your server credentials with others; the necessary logs are enough for our discussion.

TRACE want > poll_want: taker wants!
TRACE want > signal: Want
TRACE want > signal: Want
TRACE want > signal: Want
DEBUG telegpt_core::dispatcher > mytelegram sent a message: Hi
DEBUG reqwest::connect > starting new connection: https://api.telegram.org/
TRACE mio::poll > deregistering event source from poller
TRACE want > signal: Closed
TRACE mio::poll > registering event source with poller: token=Token(16777217), interests=READABLE | WRITABLE
TRACE want > signal: Want
TRACE want > signal found waiting giver, notifying
TRACE want > poll_want: taker wants!
TRACE want > signal: Want
TRACE want > signal: Want
TRACE want > signal: Want
DEBUG reqwest::connect > starting new connection: https://api.openai.com/
TRACE mio::poll > registering event source with poller: token=Token(33554434), interests=READABLE | WRITABLE
TRACE want > signal: Want
TRACE want > signal found waiting giver, notifying
TRACE want > poll_want: taker wants!
TRACE want > signal: Want
TRACE want > signal: Want
TRACE mio::poll > deregistering event source from poller
TRACE want > signal: Closed
TRACE want > signal: Want
TRACE want > signal: Want
TRACE want > signal: Want
TRACE want > signal: Want
ERROR telegpt_core::modules::chat > Failed to handle chat message: A Telegram's error: Bad Request: message text is empty

@Evergreentreejxc

@unixzii Hi, I found something interesting:
I deployed a Telegram bot from another repository, https://github.com/karfly/chatgpt_telegram_bot, but I hit the same error when I sent Chinese text to the bot. (Screenshot: Screenshot_2023-03-25-20-32-09-654_org.telegram.plus-edit.jpg)

So I thought something was wrong with the Telegram API, or with the way you guys call the interface.
Anyway, I appreciate your work ☺️

unixzii (Member) commented Mar 25, 2023

Sorry for the inconvenience, but I cannot reproduce your issue on my machines (both Linux and macOS). If it's possible, you can add some logs at src/modules/chat/mod.rs:222:

async fn actually_handle_chat_message(...) {
  // ...
  let result = stream_model_result(
      &bot,
      &chat_id,
      &sent_progress_msg,
      progress_bar,
      msgs,
      openai_client,
      &config,
  )
  .await;

  // Add log message here:
  println!("OpenAI response: {:#?}", result);

  // ...
}

So you can see what OpenAI actually returns.

@Voldeemort

> @unixzii Hi, I found something interesting:
> I deployed a Telegram bot from another repository, https://github.com/karfly/chatgpt_telegram_bot, but I hit the same error when I sent Chinese text to the bot. (Screenshot: Screenshot_2023-03-25-20-32-09-654_org.telegram.plus-edit.jpg)
>
> So I thought something was wrong with the Telegram API, or with the way you guys call the interface.
> Anyway, I appreciate your work ☺️

This issue is strange; I have tried https://github.com/m1guelpf/chatgpt-telegram and it works fine.

@Voldeemort

> Sorry for the inconvenience, but I cannot reproduce your issue on my machines (both Linux and macOS). If it's possible, you can add some logs at src/modules/chat/mod.rs:222: […]
>
> So you can see what OpenAI actually returns.

Thank you for your reply; I will try again.

@Voldeemort

> Sorry for the inconvenience, but I cannot reproduce your issue on my machines (both Linux and macOS). If it's possible, you can add some logs at src/modules/chat/mod.rs:222: […]
>
> So you can see what OpenAI actually returns.

Hi @unixzii, the following logs were printed; please have a look:
DEBUG telegpt_core::dispatcher > myusername sent a message: 你好
TRACE want > signal: Want
TRACE want > signal: Want
DEBUG reqwest::connect > starting new connection: https://api.openai.com/
TRACE want > signal: Want
TRACE mio::poll > registering event source with poller: token=Token(16777218), interests=READABLE | WRITABLE
TRACE want > signal: Want
TRACE want > signal found waiting giver, notifying
TRACE want > poll_want: taker wants!
TRACE want > signal: Want
TRACE want > signal: Want
TRACE mio::poll > deregistering event source from poller
TRACE want > signal: Closed
TRACE want > signal: Want
TRACE want > signal: Want
OpenAI response: Ok(
    ChatModelResult {
        content: "",
        token_usage: 8,
    },
)
TRACE want > signal: Want
TRACE want > signal: Want
ERROR telegpt_core::modules::chat > Failed to handle chat message: A Telegram's error: Bad Request: message text is empty

Voldeemort commented Mar 25, 2023

> Sorry for the inconvenience, but I cannot reproduce your issue on my machines (both Linux and macOS). If it's possible, you can add some logs at src/modules/chat/mod.rs:222: […]
>
> So you can see what OpenAI actually returns.

By the way, I have tried creating a new bot, but the same problem persists. From the feedback in the logs, it seems that OpenAI is returning a blank message to me.

Voldeemort commented Mar 25, 2023

> Sorry for the inconvenience, but I cannot reproduce your issue on my machines (both Linux and macOS). If it's possible, you can add some logs at src/modules/chat/mod.rs:222: […]
>
> So you can see what OpenAI actually returns.

Hi @unixzii, I think I have found the problem after capturing packets. The following is the corresponding JSON returned by the OpenAI API endpoint https://api.openai.com/v1/chat/completions:

{
    "error": {
        "param": "messages",
        "message": "This model's maximum context length is 4097 tokens. However, you requested 4108 tokens (12 in the messages, 4096 in the completion). Please reduce the length of the messages or completion.",
        "code": "context_length_exceeded",
        "type": "invalid_request_error"
    }
}

After discovering this, I tried changing the 4096 to 2048 in src/modules/openai.rs:35, and it worked normally. I'm not sure what the problem is, but I'm sure that I only sent "hi" to the Telegram bot.
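
The numbers in the error do add up: 12 prompt tokens plus the 4096-token completion requested via max_tokens comes to 4108, which exceeds the 4097-token context window that the prompt and completion share. A guard along those lines might look like this (a sketch, not the project's actual code; the names are hypothetical):

// gpt-3.5-turbo's context window, shared by prompt and completion tokens.
const MODEL_CONTEXT_LIMIT: u32 = 4097;

// Cap the requested completion size so that prompt + completion
// never exceeds the model's context window.
fn completion_budget(estimated_prompt_tokens: u32, configured_max_tokens: u32) -> u32 {
    let remaining = MODEL_CONTEXT_LIMIT.saturating_sub(estimated_prompt_tokens);
    configured_max_tokens.min(remaining)
}

// e.g. completion_budget(12, 4096) == 4085, and 12 + 4085 <= 4097.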

@Voldeemort

You can refer to this:

use std::pin::Pin;
use std::sync::Arc;

use anyhow::Error;
use async_openai::types::{ChatCompletionRequestMessage, CreateChatCompletionRequestArgs};
use async_openai::Client;
use async_trait::async_trait; // Needed for the #[async_trait] attribute below
use futures::{future, Stream, StreamExt};
use teloxide::dptree::di::{DependencyMap, DependencySupplier};

use crate::{config::SharedConfig, module_mgr::Module};

pub(crate) type ChatModelStream = Pin<Box<dyn Stream<Item = ChatModelResult> + Send>>;

#[derive(Clone, Debug, Default, Eq, PartialEq)]
pub(crate) struct ChatModelResult {
    pub content: String,
    pub token_usage: u32,
}

#[derive(Clone)]
pub(crate) struct OpenAIClient {
    client: Client,
    config: SharedConfig,
}

impl OpenAIClient {
    pub(crate) async fn request_chat_model(
        &self,
        msgs: Vec<ChatCompletionRequestMessage>,
    ) -> Result<ChatModelStream, Error> {
        let client = &self.client;
        let max_tokens = self.config.max_tokens.unwrap_or(2048).min(4096); // Default to 2048 and clamp at the 4096 hard limit
        let req = CreateChatCompletionRequestArgs::default()
            .model("gpt-3.5-turbo")
            .temperature(0.6)
            .max_tokens(max_tokens)
            .messages(msgs)
            .build()?;

        let stream = client.chat().create_stream(req).await?;
        Ok(stream
            .scan(ChatModelResult::default(), |acc, cur| {
                let content = cur
                    .as_ref()
                    .ok()
                    .and_then(|resp| resp.choices.first())
                    .and_then(|choice| choice.delta.content.as_ref());
                if let Some(content) = content {
                    acc.content.push_str(content);
                }
                future::ready(Some(acc.clone()))
            })
            .boxed())
    }

    pub(crate) fn estimate_prompt_tokens(&self, msgs: &Vec<ChatCompletionRequestMessage>) -> u32 {
        let mut text_len = 0;
        for msg in msgs {
            text_len += msg.content.len();
        }
        ((text_len as f64) * 1.4) as _
    }

    pub(crate) fn estimate_tokens(&self, text: &str) -> u32 {
        let text_len = text.len();
        ((text_len as f64) * 1.4) as _
    }
}

pub(crate) struct OpenAI;

#[async_trait]
impl Module for OpenAI {
    async fn register_dependency(&mut self, dep_map: &mut DependencyMap) -> Result<(), Error> {
        let config: Arc<SharedConfig> = dep_map.get();

        let openai_client = OpenAIClient {
            client: Client::new().with_api_key(&config.openai_api_key),
            config: config.as_ref().clone(),
        };
        dep_map.insert(openai_client);

        Ok(())
    }
}

unixzii (Member) commented Mar 29, 2023

Hi @Voldeemort, sorry for the late reply. I appreciate your investigation; that's very helpful. The default max token count is 4096; however, you can change it in the configuration file. I don't know why the input prompt exceeded the limit. That's very strange behavior.

@L-Ryland

Thanks for that discovery, @Voldeemort! Maybe it's a restriction on my OpenAI account's side that max_tokens is too large; changing the token value to 800 also works smoothly for me.
I think you can add a "maxTokens": <your_value> entry in your config.json, which sounds better?
