Reconsider llm chatgpt command and general command design #17
I need a top-level design that supports the following:
Maybe I need to figure out how the completion APIs like GPT-3 will work within that.
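One way to picture the problem: completion-style APIs (GPT-3) and chat-style APIs (ChatGPT) need different request shapes, so a common abstraction has to paper over that. Here's a minimal sketch of that idea; the class and method names are hypothetical, not the project's actual API, and the API calls are stubbed out:

```python
from abc import ABC, abstractmethod


class Model(ABC):
    """Hypothetical base class: one interface over completion and chat APIs."""

    @abstractmethod
    def prompt(self, text, system=None):
        """Run a single prompt and return the response text."""


class CompletionModel(Model):
    """A GPT-3-style completion API: a system prompt is just prepended text."""

    def prompt(self, text, system=None):
        full = f"{system}\n\n{text}" if system else text
        return f"<completion for: {full}>"  # stand-in for the real API call


class ChatModel(Model):
    """A ChatGPT-style API: a system prompt becomes a separate message."""

    def prompt(self, text, system=None):
        messages = []
        if system:
            messages.append({"role": "system", "content": system})
        messages.append({"role": "user", "content": text})
        return f"<chat response to {len(messages)} messages>"  # stand-in
```

With a shape like this, the top-level command only needs to know about `prompt()`, and each model class deals with its own API quirks.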
What do you think of keeping the short form? Or a longer version (but maybe better for clarity)?
As for the "chat mode", it could be done by using a flag.
For ChatGPT, there's also the context length to consider.
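Any chat mode has to keep the conversation within the model's context window, which usually means dropping the oldest messages first. Here's a rough sketch of that trimming; it approximates tokens by whitespace splitting (real models use a proper tokenizer), so it's only illustrative:

```python
def trim_to_context(messages, max_tokens):
    """Drop the oldest messages until a naive token estimate fits the window.

    Tokens are approximated by whitespace splitting; a real implementation
    would use the model's tokenizer. Always keeps at least the newest message,
    even if it alone exceeds the budget.
    """
    def count(msg):
        return len(msg["content"].split())

    kept = list(messages)
    while len(kept) > 1 and sum(count(m) for m in kept) > max_tokens:
        kept.pop(0)  # discard the oldest message first
    return kept
```

A fancier version might summarize the dropped history instead of discarding it, but the "trim from the front" behaviour is the baseline a chat mode needs.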
I'm going to create a new default command. UPDATE: No, that doesn't work, because I need all of the various options and arguments on the command to be available on that default command too. So I should stick with the existing command.
Changed my mind again - I think I'm going to make that the default command, and have it expose a subset of functionality that I expect to be common across all models: it will accept a prompt and a model and run that prompt. If you need to do something specialized with custom options, you can use the model-specific command.
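The dispatch described above can be sketched without any CLI framework (the real tool is built on Click, and all the command and function names below are hypothetical): known subcommands run as themselves, and anything else falls through to the default command, which only knows about the common prompt-plus-model subset.

```python
def main(argv):
    """Dispatch: known subcommands run as themselves; anything else is
    treated as a prompt for the default command."""
    subcommands = {"chatgpt": run_chatgpt, "templates": run_templates}
    if argv and argv[0] in subcommands:
        return subcommands[argv[0]](argv[1:])
    return run_default(argv)  # common subset: prompt + model


def run_default(args):
    """Accept a prompt and an optional -m model; run that prompt."""
    model = "default"
    if "-m" in args:
        i = args.index("-m")
        model = args[i + 1]
        args = args[:i] + args[i + 2:]
    prompt = " ".join(args)
    return (model, prompt)  # stand-in for actually calling the model


def run_chatgpt(args):
    """Specialized command with custom options, alongside the default."""
    return ("chatgpt", " ".join(args))


def run_templates(args):
    return ("templates", " ".join(args))
```

The design trade-off is that the default command stays small and model-agnostic, while anything model-specific lives behind an explicit subcommand.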
I want to make streaming mode the default - I'm fed up of forgetting to add the streaming flag.
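Inverting the default like that is a one-liner in most CLI libraries: the flag becomes an opt-out rather than an opt-in. A sketch using argparse (the flag name here is hypothetical):

```python
import argparse


def build_parser():
    """Streaming is on by default; --no-stream opts out."""
    parser = argparse.ArgumentParser(prog="llm")
    parser.add_argument("prompt")
    parser.add_argument(
        "--no-stream",
        dest="stream",
        action="store_false",  # default for store_false is True
        help="wait for the full response instead of streaming it",
    )
    return parser
```

So `llm "hello"` streams, and `llm "hello" --no-stream` waits for the complete response.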
Here's the set of options for the command (lines 32 to 51 in commit 68c3848).
(I just removed one of them.) Do these make sense as a set of options for any generic model? Looking at them in turn:
I'm happy with these as the standard set of options. I don't think it's too harmful that some of them won't make sense for every model; they should just return errors if used incorrectly.
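The "return errors if used incorrectly" behaviour is cheap to implement: each model declares which of the generic options it supports, and anything outside that set is rejected up front. A sketch with a made-up option set (these names are illustrative, not the tool's actual options):

```python
# Hypothetical generic option set; not every model understands all of them.
GENERIC_OPTIONS = {"system", "max_tokens", "temperature", "stop"}


def validate_options(model_supported, requested):
    """Raise ValueError if a requested option is unknown, or is a generic
    option that this particular model does not support."""
    unknown = set(requested) - GENERIC_OPTIONS
    if unknown:
        raise ValueError(f"not generic options: {sorted(unknown)}")
    unsupported = set(requested) - set(model_supported)
    if unsupported:
        raise ValueError(
            f"options not supported by this model: {sorted(unsupported)}"
        )
```

Failing fast like this keeps the shared option surface honest: the options stay uniform across commands, but a model never silently ignores one it can't honour.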
As part of the templates feature I'll be adding another option.
Further work on this will happen in a follow-up issue.
Originally posted by @simonw in #6 (comment)