[context] add support for --additional_context option for query subcommand #37
When submitting ChatCompletions, this context can be stored (after being deformatted, with \n and other problem characters removed) under the 'system' tag of the message prompt. With standard Completions, we can simply prepend prompts with as much of the context as our token budget will permit... OR we can include it in the tokenized body of text... Either way, we will need to add unit tests for this.
When we bootstrap additional_context, I think we need to privilege the other sources of context first. So, the order of precedence is (from highest to lowest):
We have three cases:

1. ChatCompletions: prepend whatever number of tokens we can add before reaching our max token count, after having added all other context, as a "system" dictionary; see #31.
2. Completions with keyword tokenization: pass the additional context to `get_context`.
3. Completions without keyword tokenization: prepend the context string with whatever number of tokens we can add before reaching our max token count, after having added all other context, as a standard string.
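The token-budget prepend used in cases 1 and 3 above could look something like the following sketch. Token counting here is a naive whitespace split standing in for the real tokenizer, and the function name and `max_tokens` parameter are illustrative assumptions:

```python
def prepend_within_budget(prompt: str, context: str, max_tokens: int) -> str:
    """Prepend as many context tokens as fit before reaching max_tokens,
    counting the prompt's own tokens against the budget first."""
    prompt_tokens = prompt.split()
    budget = max_tokens - len(prompt_tokens)
    if budget <= 0:
        return prompt  # no room left: drop the additional context entirely
    context_tokens = context.split()[:budget]
    if not context_tokens:
        return prompt
    return " ".join(context_tokens) + " " + prompt
```

Truncating the context rather than the prompt matches the precedence idea above: the other sources of context (and the question itself) are privileged, and additional_context only fills whatever budget remains.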
[tests] test additional_context passed to get_context
We should add an --additional_context option to the query subcommand that expects either a string or a file path, which it will bolt onto the request string (where? at the start? at the end?). This allows users to extend their questions with details that might not typically be formatted like a question, such as stack traces with line breaks.
We should then write some logic to structure this context (removing line breaks, etc.) and apply length limits and/or keyword tokenization to it before bolting it onto the request string.
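The structuring step could be as simple as the sketch below: collapse line breaks and repeated whitespace, then enforce a length limit. The default limit and function name are illustrative assumptions; keyword tokenization would slot in after this step:

```python
import re

def structure_context(context: str, max_chars: int = 2000) -> str:
    """Flatten line breaks and runs of whitespace into single spaces,
    then truncate to a simple character limit."""
    flattened = re.sub(r"\s+", " ", context).strip()
    return flattened[:max_chars]
```

A character limit is only a rough proxy for a token limit, but it keeps this step independent of the tokenizer; the token-aware budgeting can happen later, when the context is merged into the request.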