
[Bug] There is a problem with the message truncation mechanism #4443

Closed
1 of 3 tasks
QAbot-zh opened this issue Apr 4, 2024 · 5 comments
Labels
bug

Comments

QAbot-zh commented Apr 4, 2024

Bug Description

Models such as Claude enforce strict rules on message roles in the message queue: the first message must come from the system or the user, and user and assistant messages must strictly alternate. These rules expose functional errors in how NextChat controls the message queue through the attached history message count and history message length.
① The smallest unit of the attached history count is a single message, with no regard for roles. As a result, once the user message at the head of the queue is popped, the new head of the queue carries the assistant role. A better solution is to use a dialogue turn as the smallest unit: when the limit is exceeded, the whole turn at the head of the queue should be popped at once, i.e. a user/assistant Q&A pair is discarded as a group (illustrated after the error response below).
② max_tokens was originally the model's maximum reply length, but the program also uses it to decide whether to compress the message queue. On the one hand, combined with ①, this corrupts the message queue; on the other hand, it also hampers the use of long-context models.


{
  "error": {
    "message": "messages: first message must use the \"user\" role (request id: 2024040416494530153478024102122)",
    "type": "invalid_request_error",
    "param": "",
    "code": null
  }
}
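
For illustration, here is a minimal sketch (hypothetical message contents, not taken from the actual session) of how message-count truncation produces a queue that Claude rejects, while turn-based truncation would not:

const afterCountTruncation = [
  // The oldest user message "Q1" was popped alone, so the queue now
  // starts with an assistant message, which Claude rejects:
  { role: "assistant", content: "A1" },
  { role: "user", content: "Q2" },
  { role: "assistant", content: "A2" },
];

const afterTurnTruncation = [
  // The whole Q1/A1 turn was popped together, so the head is valid:
  { role: "user", content: "Q2" },
  { role: "assistant", content: "A2" },
];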

Steps to Reproduce

The problem can be triggered when the following conditions are met:

  1. Call a model with strict role requirements for the message queue, such as the claude-3 family of models
  2. The conversation content is long, with a context exceeding 4000 tokens
  3. There have been many dialogue rounds, exceeding the preset history message count

Expected Behavior

The user can successfully call a long-context model for conversations with a long history.

  • Use dialogue turns as the smallest unit of dialogue memory, so the message queue never starts with an assistant message (which adds nothing to the conversation anyway); a sketch follows this list.
  • Improve how max_tokens is used when handling the history message queue.
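
A minimal sketch of the proposed turn-aligned truncation, in TypeScript. The names here (ChatMessage, truncateByTurns, maxTurns) are illustrative assumptions, not NextChat's actual API:

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Evict whole user/assistant turns from the head of the history so the
// first non-system message is always a user message.
function truncateByTurns(messages: ChatMessage[], maxTurns: number): ChatMessage[] {
  // Preserve an optional leading system message; it does not count as a turn.
  const system = messages[0]?.role === "system" ? [messages[0]] : [];
  const rest = messages.slice(system.length);

  // Group messages into turns; a new turn starts at each user message.
  const turns: ChatMessage[][] = [];
  for (const msg of rest) {
    if (msg.role === "user" || turns.length === 0) {
      turns.push([msg]);
    } else {
      turns[turns.length - 1].push(msg);
    }
  }

  // Drop the oldest turns first, always a whole Q&A group at a time.
  const kept = maxTurns > 0 ? turns.slice(-maxTurns) : [];
  return [...system, ...kept.flat()];
}

The decision of how many turns to keep would then be driven by a dedicated history budget (a message count or token count), rather than by max_tokens, which should only cap the length of the model's reply.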


Screenshots

[Screenshot: the error response shown above]

Deployment Method

  • Docker
  • Vercel
  • Server

Desktop OS

No response

Desktop Browser

No response

Desktop Browser Version

No response

Smartphone Device

No response

Smartphone OS

No response

Smartphone Browser

No response

Smartphone Browser Version

No response

Additional Logs

No response

QAbot-zh added the bug label on Apr 4, 2024


Dean-YZG commented Apr 8, 2024

We will resolve this problem soon.


Dean-YZG commented Apr 8, 2024

#4457

In this PR, it will be made compatible with Claude.

Dean-YZG closed this as completed on Apr 8, 2024

QAbot-zh commented Apr 9, 2024

Thank you for the PR, but it does not seem to solve the message truncation problem. I hope you will seriously consider the feasibility of treating a dialogue turn (a question-answer pair) as the smallest unit of the message queue.

YiFlower commented Apr 11, 2024

Thank you very much for the author's attention to this issue. After updating to the latest version and testing, I still encounter this issue frequently.
My testing method: set max_tokens to a small value (I set it to 200); after about 4-6 rounds of dialogue, the error reproduces reliably.
The error screenshot is as follows:
[Screenshot of the error]

As Claude is the most powerful of the current models, I look forward to the author's attention to this issue and wish the author a happy life.
