Replies: 6 comments
-
Take a look at `self.__add_to_conversation(prompt, "user")`:

```python
# ...
response = self.session.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": "Bearer " + (api_key or self.api_key)},
    json={
        "model": self.engine,
        "messages": self.conversation,
        "stream": True,
        # kwargs
        "temperature": kwargs.get("temperature", 0.7),
        "top_p": kwargs.get("top_p", 1),
        "n": kwargs.get("n", 1),
        "user": role,
    },
    stream=True,
)
# ...
self.__add_to_conversation(full_response, response_role)
```

The entire conversation context is sent along with every request.
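The snippet relies on `self.conversation` and `self.__add_to_conversation`, which aren't shown. A minimal self-contained sketch of what that pattern looks like (the class and method bodies here are assumptions for illustration, not the actual library code):

```python
class Chatbot:
    """Sketch: keep the whole conversation locally and resend it each call."""

    def __init__(self, engine: str = "gpt-3.5-turbo") -> None:
        self.engine = engine
        # The system message seeds the context; every later message is appended.
        self.conversation = [
            {"role": "system", "content": "You are a helpful assistant."}
        ]

    def add_to_conversation(self, message: str, role: str) -> None:
        # Append one message; the full list becomes the "messages" payload
        # of the next request, which is how context is preserved.
        self.conversation.append({"role": role, "content": message})


bot = Chatbot()
bot.add_to_conversation("Hello!", "user")
bot.add_to_conversation("Hi, how can I help?", "assistant")
# The next request would send all three messages as context.
```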
-
That said, the 4,000-token limit makes this more complicated in practice; for example, you need to …
-
Someone has probably asked this before. The idea is to resubmit your latest question together with the earlier questions and the server's replies; that keeps the context. If the data exceeds the limit, a compromise is to submit only the most recent 1–3 exchanges: check the total character count, and if it's small, reach back a few more rounds; if it's large, submit just the last 1–2 exchanges in full.
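The trimming heuristic described here can be sketched as a small helper. This is a rough character-count proxy for the token limit; the function name and thresholds are illustrative, not from any library:

```python
def trim_history(messages, max_chars=3000, min_keep=2):
    """Keep the most recent messages whose combined length fits max_chars.

    Walks the history from newest to oldest, accumulating character counts,
    and always keeps at least the last `min_keep` messages even if they
    exceed the budget.
    """
    kept = []
    total = 0
    for msg in reversed(messages):
        total += len(msg["content"])
        if total > max_chars and len(kept) >= min_keep:
            break
        kept.append(msg)
    return list(reversed(kept))  # restore chronological order
```

A real implementation would count tokens (e.g. with a tokenizer) rather than characters, but the shape of the logic is the same.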
-
There's no session ID, so just keep a local list yourself and append each question and answer to it, and you're done.
-
Create an empty array; after every question and answer, append the data to it, and send that array with each chat request.
-
What I'd like to ask now is: how do I use streaming with gpt-3.5-turbo? I can't quite figure it out.
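For reference, when `"stream": true` is set, the chat completions endpoint returns Server-Sent Events: each line looks like `data: {…json chunk…}`, the incremental text lives in `choices[0]["delta"]["content"]`, and the stream ends with `data: [DONE]`. A minimal offline sketch of the parsing (the sample chunks below are illustrative, shaped like the API's real output):

```python
import json

def collect_stream(lines):
    """Assemble the full reply from the "data:" lines of a streamed response."""
    pieces = []
    for raw in lines:
        line = raw.strip()
        if not line.startswith("data:"):
            continue  # skip keep-alive blanks and non-data lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        delta = json.loads(payload)["choices"][0]["delta"]
        if "content" in delta:  # first chunk carries only the role
            pieces.append(delta["content"])
    return "".join(pieces)

# Illustrative chunks in the shape the API streams back:
sample = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo!"}}]}',
    "data: [DONE]",
]
print(collect_stream(sample))  # Hello!
```

With `requests`, you would feed it decoded lines from `response.iter_lines()` on a request made with `stream=True`, printing each piece as it arrives instead of collecting them.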
-
Looking for guidance: how do you maintain context when using the ChatGPT API?
I've read OpenAI's official API docs (https://platform.openai.com/docs/guides/chat/introduction), and unlike the web version there is no session ID.