Enable OpenAI model streaming and fix num_tokens_from_messages #184
Conversation
A library can NOT use `print` (and `input`) as its core functionality. The RolePlaying step should be split into single steps, and each step should return an iterator that yields words or chunks as they are received as deltas. All printing should be done in user code: examples, apps, etc. Even though CAMEL currently uses `print` and `input`, we are going to move away from this to a better API. I suggest not merging this.
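The iterator-style API the reviewer describes could be sketched roughly as follows. This is a hypothetical `step_stream` helper, not CAMEL's actual code: the library only yields chunks, and all printing stays in user code.

```python
from typing import Iterator


def step_stream(deltas: Iterator[str]) -> Iterator[str]:
    """Yield content chunks as they are received; no printing inside the library."""
    for delta in deltas:
        if delta:  # skip empty deltas, e.g. role-only chunks
            yield delta


# Printing happens only in user code (examples, apps):
for chunk in step_stream(iter(["Hello", ", ", "world"])):
    print(chunk, end="")
```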
@Obs01ete Several questions,
@zechengz thanks for your comments. We are not going to support terminal prints to display the answers in the near future. Most likely we will rework ChatMessage into a future object supporting `.get()` and a chunk generator for streaming mode.
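A future-like message object of the kind described might look roughly like this. All names here are hypothetical, and the sketch assumes a single producer feeds chunks and then closes the stream:

```python
import queue
import threading
from typing import Iterator, List, Optional


class StreamingMessage:
    """Hypothetical future-like message: iter_chunks() yields pieces as they
    arrive, get() blocks until the full text is available."""

    def __init__(self) -> None:
        self._chunks: "queue.Queue[Optional[str]]" = queue.Queue()
        self._parts: List[str] = []
        self._done = threading.Event()

    def feed(self, chunk: str) -> None:
        """Called by the producer for each received delta."""
        self._chunks.put(chunk)

    def close(self) -> None:
        """Called by the producer when the stream ends."""
        self._chunks.put(None)

    def iter_chunks(self) -> Iterator[str]:
        """Streaming mode: yield chunks until the sentinel is seen."""
        while (chunk := self._chunks.get()) is not None:
            self._parts.append(chunk)
            yield chunk
        self._done.set()

    def get(self) -> str:
        """Batch mode: drain any remaining chunks and return the full text."""
        if not self._done.is_set():
            for _ in self.iter_chunks():
                pass
        return "".join(self._parts)
```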
@Obs01ete I have updated the PR. For now I still parse the streaming output the same as the batch output. For other modifications, please refer to the PR description. Please take another round of review if possible. Thanks in advance!
```python
encoding = get_model_encoding(self.model.value_for_tiktoken)
completion_tokens = 0
for message in output_messages:
    completion_tokens += len(encoding.encode(message.content))
```
What is the difference between this and `num_tokens_from_messages`?
It seems that the OpenAI usage count for completion tokens does not include the `tokens_per_message` overhead etc. that `num_tokens_from_messages` adds.
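To illustrate the difference being discussed, here is a runnable sketch. The whitespace tokenizer is a stand-in so the example runs without tiktoken (the real code uses the encoding from `get_model_encoding`), and `tokens_per_message = 4` is the per-message overhead value from the OpenAI cookbook recipe for gpt-3.5-turbo-0301:

```python
from typing import List


def encode(text: str) -> List[str]:
    # Stand-in tokenizer (whitespace split) so the sketch is self-contained;
    # the real code encodes with tiktoken.
    return text.split()


def count_completion_tokens(messages: List[str]) -> int:
    # Mirrors the PR's loop: sum of encoded content lengths only.
    # This matches OpenAI's reported completion_tokens usage.
    return sum(len(encode(m)) for m in messages)


def num_tokens_from_messages(messages: List[str], tokens_per_message: int = 4) -> int:
    # Adds a fixed per-message overhead (role markers etc.), as in the
    # OpenAI cookbook recipe; completion usage does NOT include this.
    return sum(tokens_per_message + len(encode(m)) for m in messages)


msgs = ["Hello there", "General Kenobi"]
# The two counts differ by exactly tokens_per_message per message:
diff = num_tokens_from_messages(msgs) - count_completion_tokens(msgs)
```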
LGTM! Thanks @zechengz
Description
- Enable `stream` for the OpenAI model and handle stream mode in `ChatAgent`
- Update the `translator` example
- Fix `num_tokens_from_messages`

Following is an example of `translator` for a simple input json file.
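As a rough illustration of handling stream mode (accumulating content deltas into full messages, so the streaming output can be parsed the same as a batch response): the dict layout below mirrors OpenAI's chat-completion chunk format, but `collect_stream` and the hand-written `fake_stream` data are illustrative stand-ins, not a real API call.

```python
def collect_stream(chunks):
    """Accumulate each choice's content deltas into a full message string."""
    contents = {}
    for chunk in chunks:
        for choice in chunk["choices"]:
            delta = choice.get("delta", {})
            if "content" in delta:
                idx = choice["index"]
                contents[idx] = contents.get(idx, "") + delta["content"]
    # Return messages ordered by choice index, same shape as a batch response.
    return [contents[i] for i in sorted(contents)]


# Hand-written stand-in for a streamed chat-completion response:
fake_stream = [
    {"choices": [{"index": 0, "delta": {"role": "assistant"}}]},
    {"choices": [{"index": 0, "delta": {"content": "Bonjour"}}]},
    {"choices": [{"index": 0, "delta": {"content": " !"}}]},
]
```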
Motivation and Context
Parts of #178
Some changes for future PRs:
Types of changes
What types of changes does your code introduce? Put an `x` in all the boxes that apply:

Checklist